00:00:00.002 Started by upstream project "autotest-nightly" build number 4303
00:00:00.002 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3666
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.033 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.034 The recommended git tool is: git
00:00:00.035 using credential 00000000-0000-0000-0000-000000000002
00:00:00.037 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.086 Fetching changes from the remote Git repository
00:00:00.091 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.106 Using shallow fetch with depth 1
00:00:00.106 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.106 > git --version # timeout=10
00:00:00.127 > git --version # 'git version 2.39.2'
00:00:00.127 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.155 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.155 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.008 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.018 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.030 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.030 > git config core.sparsecheckout # timeout=10
00:00:03.042 > git read-tree -mu HEAD # timeout=10
00:00:03.055 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.074 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.074 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.156 [Pipeline] Start of Pipeline
00:00:03.170 [Pipeline] library
00:00:03.172 Loading library shm_lib@master
00:00:03.172 Library shm_lib@master is cached. Copying from home.
00:00:03.187 [Pipeline] node
00:00:03.201 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:03.203 [Pipeline] {
00:00:03.213 [Pipeline] catchError
00:00:03.215 [Pipeline] {
00:00:03.230 [Pipeline] wrap
00:00:03.241 [Pipeline] {
00:00:03.251 [Pipeline] stage
00:00:03.253 [Pipeline] { (Prologue)
00:00:03.275 [Pipeline] echo
00:00:03.277 Node: VM-host-WFP7
00:00:03.285 [Pipeline] cleanWs
00:00:03.297 [WS-CLEANUP] Deleting project workspace...
00:00:03.297 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.305 [WS-CLEANUP] done
00:00:03.521 [Pipeline] setCustomBuildProperty
00:00:03.610 [Pipeline] httpRequest
00:00:03.926 [Pipeline] echo
00:00:03.928 Sorcerer 10.211.164.20 is alive
00:00:03.937 [Pipeline] retry
00:00:03.938 [Pipeline] {
00:00:03.953 [Pipeline] httpRequest
00:00:03.957 HttpMethod: GET
00:00:03.958 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.958 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.959 Response Code: HTTP/1.1 200 OK
00:00:03.959 Success: Status code 200 is in the accepted range: 200,404
00:00:03.960 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.106 [Pipeline] }
00:00:04.117 [Pipeline] // retry
00:00:04.121 [Pipeline] sh
00:00:04.401 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.414 [Pipeline] httpRequest
00:00:05.261 [Pipeline] echo
00:00:05.263 Sorcerer 10.211.164.20 is alive
00:00:05.270 [Pipeline] retry
00:00:05.273 [Pipeline] {
00:00:05.285 [Pipeline] httpRequest
00:00:05.289 HttpMethod: GET
00:00:05.290 URL: http://10.211.164.20/packages/spdk_ff2e6bfe4247e04e9994253f61b3a5501e7a42aa.tar.gz
00:00:05.290 Sending request to url: http://10.211.164.20/packages/spdk_ff2e6bfe4247e04e9994253f61b3a5501e7a42aa.tar.gz
00:00:05.296 Response Code: HTTP/1.1 200 OK
00:00:05.296 Success: Status code 200 is in the accepted range: 200,404
00:00:05.297 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_ff2e6bfe4247e04e9994253f61b3a5501e7a42aa.tar.gz
00:01:27.379 [Pipeline] }
00:01:27.397 [Pipeline] // retry
00:01:27.405 [Pipeline] sh
00:01:27.697 + tar --no-same-owner -xf spdk_ff2e6bfe4247e04e9994253f61b3a5501e7a42aa.tar.gz
00:01:30.257 [Pipeline] sh
00:01:30.583 + git -C spdk log --oneline -n5
00:01:30.583 ff2e6bfe4 lib/lvol: cluster size must be a multiple of bs_dev->blocklen
00:01:30.583 9885e1d29 lib/blob: cluster_sz must be a multiple of PAGE
00:01:30.583 9a6847636 bdev/nvme: Fix spdk_bdev_nvme_create()
00:01:30.583 8bbc7b697 nvmf: Block ctrlr-only admin cmds if NSID is set
00:01:30.583 d66a1e46f test/nvme/interrupt: Verify pre|post IO cpu load
00:01:30.603 [Pipeline] writeFile
00:01:30.618 [Pipeline] sh
00:01:30.901 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:30.914 [Pipeline] sh
00:01:31.199 + cat autorun-spdk.conf
00:01:31.199 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:31.200 SPDK_RUN_ASAN=1
00:01:31.200 SPDK_RUN_UBSAN=1
00:01:31.200 SPDK_TEST_RAID=1
00:01:31.200 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:31.207 RUN_NIGHTLY=1
00:01:31.209 [Pipeline] }
00:01:31.223 [Pipeline] // stage
00:01:31.239 [Pipeline] stage
00:01:31.241 [Pipeline] { (Run VM)
00:01:31.254 [Pipeline] sh
00:01:31.537 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:31.537 + echo 'Start stage prepare_nvme.sh'
00:01:31.537 Start stage prepare_nvme.sh
00:01:31.537 + [[ -n 7 ]]
00:01:31.537 + disk_prefix=ex7
00:01:31.537 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:31.537 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:31.537 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:31.537 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:31.537 ++ SPDK_RUN_ASAN=1
00:01:31.537 ++ SPDK_RUN_UBSAN=1
00:01:31.537 ++ SPDK_TEST_RAID=1
00:01:31.537 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:31.537 ++ RUN_NIGHTLY=1
00:01:31.537 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:31.537 + nvme_files=()
00:01:31.537 + declare -A nvme_files
00:01:31.537 + backend_dir=/var/lib/libvirt/images/backends
00:01:31.537 + nvme_files['nvme.img']=5G
00:01:31.537 + nvme_files['nvme-cmb.img']=5G
00:01:31.537 + nvme_files['nvme-multi0.img']=4G
00:01:31.537 + nvme_files['nvme-multi1.img']=4G
00:01:31.537 + nvme_files['nvme-multi2.img']=4G
00:01:31.537 + nvme_files['nvme-openstack.img']=8G
00:01:31.537 + nvme_files['nvme-zns.img']=5G
00:01:31.537 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:31.537 + (( SPDK_TEST_FTL == 1 ))
00:01:31.537 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:31.537 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:31.537 + for nvme in "${!nvme_files[@]}"
00:01:31.537 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:01:31.537 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:31.537 + for nvme in "${!nvme_files[@]}"
00:01:31.537 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:01:31.537 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:31.537 + for nvme in "${!nvme_files[@]}"
00:01:31.537 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:01:31.537 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:31.537 + for nvme in "${!nvme_files[@]}"
00:01:31.537 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:01:31.537 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:31.537 + for nvme in "${!nvme_files[@]}"
00:01:31.537 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:01:31.537 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:31.537 + for nvme in "${!nvme_files[@]}"
00:01:31.537 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:01:31.537 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:31.537 + for nvme in "${!nvme_files[@]}"
00:01:31.537 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:01:31.797 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:31.797 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:01:31.797 + echo 'End stage prepare_nvme.sh'
00:01:31.797 End stage prepare_nvme.sh
00:01:31.810 [Pipeline] sh
00:01:32.095 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:32.095 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39
00:01:32.095
00:01:32.095 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:32.095 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:32.095 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:32.095 HELP=0
00:01:32.095 DRY_RUN=0
00:01:32.095 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,
00:01:32.095 NVME_DISKS_TYPE=nvme,nvme,
00:01:32.095 NVME_AUTO_CREATE=0
00:01:32.095 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,
00:01:32.095 NVME_CMB=,,
00:01:32.095 NVME_PMR=,,
00:01:32.095 NVME_ZNS=,,
00:01:32.095 NVME_MS=,,
00:01:32.095 NVME_FDP=,,
00:01:32.095 SPDK_VAGRANT_DISTRO=fedora39
00:01:32.095 SPDK_VAGRANT_VMCPU=10
00:01:32.095 SPDK_VAGRANT_VMRAM=12288
00:01:32.095 SPDK_VAGRANT_PROVIDER=libvirt
00:01:32.095 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:32.095 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:32.095 SPDK_OPENSTACK_NETWORK=0
00:01:32.095 VAGRANT_PACKAGE_BOX=0
00:01:32.095 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:32.095 FORCE_DISTRO=true
00:01:32.095 VAGRANT_BOX_VERSION=
00:01:32.095 EXTRA_VAGRANTFILES=
00:01:32.095 NIC_MODEL=virtio
00:01:32.095
00:01:32.095 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:32.095 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:34.004 Bringing machine 'default' up with 'libvirt' provider...
00:01:34.574 ==> default: Creating image (snapshot of base box volume).
00:01:34.834 ==> default: Creating domain with the following settings...
00:01:34.834 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732548513_4314b6dc815c990c5c3e
00:01:34.834 ==> default: -- Domain type: kvm
00:01:34.834 ==> default: -- Cpus: 10
00:01:34.834 ==> default: -- Feature: acpi
00:01:34.834 ==> default: -- Feature: apic
00:01:34.834 ==> default: -- Feature: pae
00:01:34.834 ==> default: -- Memory: 12288M
00:01:34.834 ==> default: -- Memory Backing: hugepages:
00:01:34.834 ==> default: -- Management MAC:
00:01:34.834 ==> default: -- Loader:
00:01:34.834 ==> default: -- Nvram:
00:01:34.835 ==> default: -- Base box: spdk/fedora39
00:01:34.835 ==> default: -- Storage pool: default
00:01:34.835 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732548513_4314b6dc815c990c5c3e.img (20G)
00:01:34.835 ==> default: -- Volume Cache: default
00:01:34.835 ==> default: -- Kernel:
00:01:34.835 ==> default: -- Initrd:
00:01:34.835 ==> default: -- Graphics Type: vnc
00:01:34.835 ==> default: -- Graphics Port: -1
00:01:34.835 ==> default: -- Graphics IP: 127.0.0.1
00:01:34.835 ==> default: -- Graphics Password: Not defined
00:01:34.835 ==> default: -- Video Type: cirrus
00:01:34.835 ==> default: -- Video VRAM: 9216
00:01:34.835 ==> default: -- Sound Type:
00:01:34.835 ==> default: -- Keymap: en-us
00:01:34.835 ==> default: -- TPM Path:
00:01:34.835 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:34.835 ==> default: -- Command line args:
00:01:34.835 ==> default: -> value=-device,
00:01:34.835 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:34.835 ==> default: -> value=-drive,
00:01:34.835 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0,
00:01:34.835 ==> default: -> value=-device,
00:01:34.835 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:34.835 ==> default: -> value=-device,
00:01:34.835 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:34.835 ==> default: -> value=-drive,
00:01:34.835 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:34.835 ==> default: -> value=-device,
00:01:34.835 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:34.835 ==> default: -> value=-drive,
00:01:34.835 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:34.835 ==> default: -> value=-device,
00:01:34.835 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:34.835 ==> default: -> value=-drive,
00:01:34.835 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:34.835 ==> default: -> value=-device,
00:01:34.835 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.095 ==> default: Creating shared folders metadata...
00:01:35.095 ==> default: Starting domain.
00:01:36.037 ==> default: Waiting for domain to get an IP address...
00:01:54.188 ==> default: Waiting for SSH to become available...
00:01:54.188 ==> default: Configuring and enabling network interfaces...
00:01:59.470 default: SSH address: 192.168.121.146:22
00:01:59.470 default: SSH username: vagrant
00:01:59.470 default: SSH auth method: private key
00:02:02.014 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:10.191 ==> default: Mounting SSHFS shared folder...
00:02:12.092 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:12.092 ==> default: Checking Mount..
00:02:13.478 ==> default: Folder Successfully Mounted!
00:02:13.478 ==> default: Running provisioner: file...
00:02:14.857 default: ~/.gitconfig => .gitconfig
00:02:15.116
00:02:15.116 SUCCESS!
00:02:15.116
00:02:15.116 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:15.116 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:15.116 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:15.116
00:02:15.125 [Pipeline] }
00:02:15.141 [Pipeline] // stage
00:02:15.150 [Pipeline] dir
00:02:15.150 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:15.152 [Pipeline] {
00:02:15.164 [Pipeline] catchError
00:02:15.166 [Pipeline] {
00:02:15.178 [Pipeline] sh
00:02:15.460 + vagrant ssh-config --host vagrant
00:02:15.460 + sed -ne /^Host/,$p
00:02:15.460 + tee ssh_conf
00:02:17.991 Host vagrant
00:02:17.991 HostName 192.168.121.146
00:02:17.991 User vagrant
00:02:17.991 Port 22
00:02:17.991 UserKnownHostsFile /dev/null
00:02:17.991 StrictHostKeyChecking no
00:02:17.991 PasswordAuthentication no
00:02:17.991 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:17.991 IdentitiesOnly yes
00:02:17.991 LogLevel FATAL
00:02:17.991 ForwardAgent yes
00:02:17.991 ForwardX11 yes
00:02:17.991
00:02:18.005 [Pipeline] withEnv
00:02:18.007 [Pipeline] {
00:02:18.021 [Pipeline] sh
00:02:18.302 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:18.302 source /etc/os-release
00:02:18.302 [[ -e /image.version ]] && img=$(< /image.version)
00:02:18.302 # Minimal, systemd-like check.
00:02:18.302 if [[ -e /.dockerenv ]]; then
00:02:18.302 # Clear garbage from the node's name:
00:02:18.302 # agt-er_autotest_547-896 -> autotest_547-896
00:02:18.302 # $HOSTNAME is the actual container id
00:02:18.302 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:18.302 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:18.302 # We can assume this is a mount from a host where container is running,
00:02:18.302 # so fetch its hostname to easily identify the target swarm worker.
00:02:18.302 container="$(< /etc/hostname) ($agent)"
00:02:18.302 else
00:02:18.302 # Fallback
00:02:18.302 container=$agent
00:02:18.302 fi
00:02:18.302 fi
00:02:18.302 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:18.302
00:02:18.572 [Pipeline] }
00:02:18.587 [Pipeline] // withEnv
00:02:18.594 [Pipeline] setCustomBuildProperty
00:02:18.607 [Pipeline] stage
00:02:18.609 [Pipeline] { (Tests)
00:02:18.624 [Pipeline] sh
00:02:18.918 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:19.240 [Pipeline] sh
00:02:19.520 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:19.796 [Pipeline] timeout
00:02:19.796 Timeout set to expire in 1 hr 30 min
00:02:19.798 [Pipeline] {
00:02:19.814 [Pipeline] sh
00:02:20.093 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:20.661 HEAD is now at ff2e6bfe4 lib/lvol: cluster size must be a multiple of bs_dev->blocklen
00:02:20.673 [Pipeline] sh
00:02:20.953 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:21.227 [Pipeline] sh
00:02:21.514 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:21.791 [Pipeline] sh
00:02:22.073 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:22.333 ++ readlink -f spdk_repo
00:02:22.333 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:22.333 + [[ -n /home/vagrant/spdk_repo ]]
00:02:22.333 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:22.333 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:22.333 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:22.333 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:22.333 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:22.333 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:22.333 + cd /home/vagrant/spdk_repo
00:02:22.333 + source /etc/os-release
00:02:22.333 ++ NAME='Fedora Linux'
00:02:22.333 ++ VERSION='39 (Cloud Edition)'
00:02:22.333 ++ ID=fedora
00:02:22.333 ++ VERSION_ID=39
00:02:22.334 ++ VERSION_CODENAME=
00:02:22.334 ++ PLATFORM_ID=platform:f39
00:02:22.334 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:22.334 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:22.334 ++ LOGO=fedora-logo-icon
00:02:22.334 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:22.334 ++ HOME_URL=https://fedoraproject.org/
00:02:22.334 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:22.334 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:22.334 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:22.334 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:22.334 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:22.334 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:22.334 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:22.334 ++ SUPPORT_END=2024-11-12
00:02:22.334 ++ VARIANT='Cloud Edition'
00:02:22.334 ++ VARIANT_ID=cloud
00:02:22.334 + uname -a
00:02:22.334 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:22.334 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:22.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:22.903 Hugepages
00:02:22.903 node hugesize free / total
00:02:22.903 node0 1048576kB 0 / 0
00:02:22.903 node0 2048kB 0 / 0
00:02:22.903
00:02:22.903 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:22.903 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:22.903 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:22.903 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:22.903 + rm -f /tmp/spdk-ld-path
00:02:22.903 + source autorun-spdk.conf
00:02:22.903 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:22.903 ++ SPDK_RUN_ASAN=1
00:02:22.903 ++ SPDK_RUN_UBSAN=1
00:02:22.903 ++ SPDK_TEST_RAID=1
00:02:22.903 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:22.903 ++ RUN_NIGHTLY=1
00:02:22.903 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:22.903 + [[ -n '' ]]
00:02:22.903 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:22.903 + for M in /var/spdk/build-*-manifest.txt
00:02:22.903 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:22.903 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:23.161 + for M in /var/spdk/build-*-manifest.txt
00:02:23.161 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:23.161 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:23.161 + for M in /var/spdk/build-*-manifest.txt
00:02:23.161 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:23.161 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:23.161 ++ uname
00:02:23.161 + [[ Linux == \L\i\n\u\x ]]
00:02:23.161 + sudo dmesg -T
00:02:23.161 + sudo dmesg --clear
00:02:23.161 + dmesg_pid=5424
00:02:23.161 + [[ Fedora Linux == FreeBSD ]]
00:02:23.161 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:23.161 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:23.161 + sudo dmesg -Tw
00:02:23.161 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:23.161 + [[ -x /usr/src/fio-static/fio ]]
00:02:23.161 + export FIO_BIN=/usr/src/fio-static/fio
00:02:23.161 + FIO_BIN=/usr/src/fio-static/fio
00:02:23.161 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:23.161 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:23.161 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:23.161 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:23.161 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:23.161 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:23.161 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:23.161 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:23.161 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
15:29:21 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
15:29:21 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
15:29:21 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
15:29:21 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
15:29:21 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
15:29:21 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
15:29:21 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
15:29:21 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1
15:29:21 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
15:29:21 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
15:29:21 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
15:29:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
15:29:21 -- scripts/common.sh@15 -- $ shopt -s extglob
15:29:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
15:29:21 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
15:29:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
15:29:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:29:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:29:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:29:21 -- paths/export.sh@5 -- $ export PATH
15:29:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:29:21 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
15:29:21 -- common/autobuild_common.sh@493 -- $ date +%s
15:29:21 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732548561.XXXXXX
15:29:21 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732548561.5xtdcV
15:29:21 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
15:29:21 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
15:29:21 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
15:29:21 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
15:29:21 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
15:29:21 -- common/autobuild_common.sh@509 -- $ get_config_params
15:29:21 -- common/autotest_common.sh@409 -- $ xtrace_disable
15:29:21 -- common/autotest_common.sh@10 -- $ set +x
15:29:21 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
15:29:21 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
15:29:21 -- pm/common@17 -- $ local monitor
15:29:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
15:29:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
15:29:21 -- pm/common@25 -- $ sleep 1
15:29:21 -- pm/common@21 -- $ date +%s
15:29:21 -- pm/common@21 -- $ date +%s
15:29:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732548561
15:29:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732548561
00:02:23.420 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732548561_collect-vmstat.pm.log
00:02:23.420 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732548561_collect-cpu-load.pm.log
00:02:24.356 15:29:22 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
15:29:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
15:29:22 -- spdk/autobuild.sh@12 -- $ umask 022
15:29:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
15:29:22 -- spdk/autobuild.sh@16 -- $ date -u
00:02:24.356 Mon Nov 25 03:29:22 PM UTC 2024
15:29:22 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:24.356 v25.01-pre-236-gff2e6bfe4
15:29:22 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
15:29:22 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
15:29:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
15:29:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable
15:29:22 -- common/autotest_common.sh@10 -- $ set +x
00:02:24.356 ************************************
00:02:24.356 START TEST asan
00:02:24.356 ************************************
00:02:24.356 using asan
15:29:22 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:24.356
00:02:24.356 real 0m0.000s
00:02:24.356 user 0m0.000s
00:02:24.356 sys 0m0.000s
15:29:22 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
15:29:22 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:24.356 ************************************
00:02:24.356 END TEST asan
00:02:24.356 ************************************
15:29:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
15:29:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
15:29:23 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
15:29:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable
15:29:23 -- common/autotest_common.sh@10 -- $ set +x
00:02:24.615 ************************************
00:02:24.615 START TEST ubsan
00:02:24.615 ************************************
00:02:24.615 using ubsan
15:29:23 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:24.615
00:02:24.615 real 0m0.000s
00:02:24.615 user 0m0.000s
00:02:24.615 sys 0m0.000s
15:29:23 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
15:29:23 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:24.615 ************************************
00:02:24.615 END TEST ubsan
00:02:24.615 ************************************
15:29:23 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
15:29:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
15:29:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
15:29:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
15:29:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
15:29:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
15:29:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
15:29:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
15:29:23 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:02:24.615 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:24.615 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:25.182 Using 'verbs' RDMA provider
00:02:41.011 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:55.903 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:56.470 Creating mk/config.mk...done.
00:02:56.470 Creating mk/cc.flags.mk...done.
00:02:56.470 Type 'make' to build.
00:02:56.470 15:29:55 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
15:29:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
15:29:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable
15:29:55 -- common/autotest_common.sh@10 -- $ set +x
00:02:56.470 ************************************
00:02:56.470 START TEST make
00:02:56.470 ************************************
15:29:55 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:57.037 make[1]: Nothing to be done for 'all'.
00:03:09.273 The Meson build system
00:03:09.273 Version: 1.5.0
00:03:09.273 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:09.273 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:09.273 Build type: native build
00:03:09.273 Program cat found: YES (/usr/bin/cat)
00:03:09.273 Project name: DPDK
00:03:09.273 Project version: 24.03.0
00:03:09.273 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:09.273 C linker for the host machine: cc ld.bfd 2.40-14
00:03:09.273 Host machine cpu family: x86_64
00:03:09.273 Host machine cpu: x86_64
00:03:09.273 Message: ## Building in Developer Mode ##
00:03:09.273 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:09.273 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:09.273 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:09.273 Program python3 found: YES (/usr/bin/python3)
00:03:09.273 Program cat found: YES (/usr/bin/cat)
00:03:09.273 Compiler for C supports arguments -march=native: YES
00:03:09.273 Checking for size of "void *" : 8
00:03:09.273 Checking for size of "void *" : 8 (cached)
00:03:09.273 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:09.273 Library m found: YES
00:03:09.273 Library numa found: YES
00:03:09.273 Has header "numaif.h" : YES
00:03:09.273 Library fdt found: NO
00:03:09.273 Library execinfo found: NO
00:03:09.273 Has header "execinfo.h" : YES
00:03:09.273 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:09.273 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:09.273 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:09.273 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:09.273 Run-time dependency openssl found: YES 3.1.1
00:03:09.273 Run-time dependency libpcap found: YES 1.10.4
00:03:09.273 Has header "pcap.h" with dependency libpcap: YES
00:03:09.273 Compiler for C supports arguments -Wcast-qual: YES
00:03:09.273 Compiler for C supports arguments -Wdeprecated: YES
00:03:09.273 Compiler for C supports arguments -Wformat: YES
00:03:09.273 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:09.273 Compiler for C supports arguments -Wformat-security: NO
00:03:09.273 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:09.273 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:09.273 Compiler for C supports arguments -Wnested-externs: YES
00:03:09.273 Compiler for C supports arguments -Wold-style-definition: YES
00:03:09.273 Compiler for C supports arguments -Wpointer-arith: YES
00:03:09.273 Compiler for C supports arguments -Wsign-compare: YES
00:03:09.273 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:09.273 Compiler for C supports arguments -Wundef: YES
00:03:09.273 Compiler for C supports arguments -Wwrite-strings: YES
00:03:09.273 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:09.273 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:09.273 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:09.273 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:09.273 Program objdump found: YES (/usr/bin/objdump)
00:03:09.273 Compiler for C supports arguments -mavx512f: YES
00:03:09.273 Checking if "AVX512 checking" compiles: YES
00:03:09.273 Fetching value of define "__SSE4_2__" : 1
00:03:09.273 Fetching value of define "__AES__" : 1
00:03:09.273 Fetching value of define "__AVX__" : 1
00:03:09.273 Fetching value of define "__AVX2__" : 1
00:03:09.273 Fetching value of define "__AVX512BW__" : 1
00:03:09.273 Fetching value of define "__AVX512CD__" : 1
00:03:09.273 Fetching value of define "__AVX512DQ__" : 1
00:03:09.273 Fetching value of define "__AVX512F__" : 1
00:03:09.273 Fetching value of define "__AVX512VL__" : 1
00:03:09.273 Fetching value of define "__PCLMUL__" : 1
00:03:09.273 Fetching value of define "__RDRND__" : 1
00:03:09.273 Fetching value of define "__RDSEED__" : 1
00:03:09.273 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:09.273 Fetching value of define "__znver1__" : (undefined)
00:03:09.273 Fetching value of define "__znver2__" : (undefined)
00:03:09.273 Fetching value of define "__znver3__" : (undefined)
00:03:09.273 Fetching value of define "__znver4__" : (undefined)
00:03:09.273 Library asan found: YES
00:03:09.273 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:09.273 Message: lib/log: Defining dependency "log"
00:03:09.273 Message: lib/kvargs: Defining dependency "kvargs"
00:03:09.273 Message: lib/telemetry: Defining dependency "telemetry"
00:03:09.273 Library rt found: YES
00:03:09.273 Checking for function "getentropy" : NO
00:03:09.273 Message: lib/eal: Defining dependency "eal"
00:03:09.273 Message: lib/ring: Defining dependency "ring"
00:03:09.273 Message: lib/rcu: Defining dependency "rcu"
00:03:09.273 Message: lib/mempool: Defining dependency "mempool"
00:03:09.273 Message: lib/mbuf: Defining dependency "mbuf"
00:03:09.273 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:09.273 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:09.273 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:09.273 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:09.273 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:09.273 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:09.273 Compiler for C supports arguments -mpclmul: YES
00:03:09.273 Compiler for C supports arguments -maes: YES
00:03:09.273 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:09.273 Compiler for C supports arguments -mavx512bw: YES
00:03:09.273 Compiler for C supports arguments -mavx512dq: YES
00:03:09.273 Compiler for C supports arguments -mavx512vl: YES
00:03:09.273 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:09.273 Compiler for C supports arguments -mavx2: YES
00:03:09.273 Compiler for C supports arguments -mavx: YES
00:03:09.273 Message: lib/net: Defining dependency "net"
00:03:09.273 Message: lib/meter: Defining dependency "meter"
00:03:09.273 Message: lib/ethdev: Defining dependency "ethdev"
00:03:09.273 Message: lib/pci: Defining dependency "pci"
00:03:09.273 Message: lib/cmdline: Defining dependency "cmdline"
00:03:09.273 Message: lib/hash: Defining dependency "hash"
00:03:09.273 Message: lib/timer: Defining dependency "timer"
00:03:09.273 Message: lib/compressdev: Defining dependency "compressdev"
00:03:09.273 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:09.273 Message: lib/dmadev: Defining dependency "dmadev"
00:03:09.273 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:09.273 Message: lib/power: Defining dependency "power"
00:03:09.273 Message: lib/reorder: Defining dependency "reorder"
00:03:09.273 Message: lib/security: Defining dependency "security"
00:03:09.273 Has header "linux/userfaultfd.h" : YES
00:03:09.273 Has header "linux/vduse.h" : YES
00:03:09.273 Message: lib/vhost: Defining dependency "vhost"
00:03:09.273 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:09.273 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:09.273 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:09.273 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:09.273 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:09.273 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:09.273 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:09.273 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:09.273 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:09.273 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:09.273 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:09.273 Configuring doxy-api-html.conf using configuration
00:03:09.273 Configuring doxy-api-man.conf using configuration
00:03:09.273 Program mandb found: YES (/usr/bin/mandb)
00:03:09.273 Program sphinx-build found: NO
00:03:09.273 Configuring rte_build_config.h using configuration
00:03:09.273 Message:
00:03:09.273 =================
00:03:09.273 Applications Enabled
00:03:09.273 =================
00:03:09.273
00:03:09.273 apps:
00:03:09.273
00:03:09.274 Message:
00:03:09.274 =================
00:03:09.274 Libraries Enabled
00:03:09.274 =================
00:03:09.274
00:03:09.274 libs:
00:03:09.274 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:09.274 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:09.274 cryptodev, dmadev, power, reorder, security, vhost,
00:03:09.274
00:03:09.274 Message:
00:03:09.274 ===============
00:03:09.274 Drivers Enabled
00:03:09.274 ===============
00:03:09.274
00:03:09.274 common:
00:03:09.274
00:03:09.274 bus:
00:03:09.274 pci, vdev,
00:03:09.274 mempool:
00:03:09.274 ring,
00:03:09.274 dma:
00:03:09.274
00:03:09.274 net:
00:03:09.274
00:03:09.274 crypto:
00:03:09.274
00:03:09.274 compress:
00:03:09.274
00:03:09.274 vdpa:
00:03:09.274
00:03:09.274
00:03:09.274 Message:
00:03:09.274 =================
00:03:09.274 Content Skipped
00:03:09.274 =================
00:03:09.274
00:03:09.274 apps:
00:03:09.274 dumpcap: explicitly disabled via build config
00:03:09.274 graph: explicitly disabled via build config
00:03:09.274 pdump: explicitly disabled via build config
00:03:09.274 proc-info: explicitly disabled via build config
00:03:09.274 test-acl: explicitly disabled via build config
00:03:09.274 test-bbdev: explicitly disabled via build config
00:03:09.274 test-cmdline: explicitly disabled via build config
00:03:09.274 test-compress-perf: explicitly disabled via build config
00:03:09.274 test-crypto-perf: explicitly disabled via build config
00:03:09.274 test-dma-perf: explicitly disabled via build config
00:03:09.274 test-eventdev: explicitly disabled via build config
00:03:09.274 test-fib: explicitly disabled via build config
00:03:09.274 test-flow-perf: explicitly disabled via build config
00:03:09.274 test-gpudev: explicitly disabled via build config
00:03:09.274 test-mldev: explicitly disabled via build config
00:03:09.274 test-pipeline: explicitly disabled via build config
00:03:09.274 test-pmd: explicitly disabled via build config
00:03:09.274 test-regex: explicitly disabled via build config
00:03:09.274 test-sad: explicitly disabled via build config
00:03:09.274 test-security-perf: explicitly disabled via build config
00:03:09.274
00:03:09.274 libs:
00:03:09.274 argparse: explicitly disabled via build config
00:03:09.274 metrics: explicitly disabled via build config
00:03:09.274 acl: explicitly disabled via build config
00:03:09.274 bbdev: explicitly disabled via build config
00:03:09.274 bitratestats: explicitly disabled via build config
00:03:09.274 bpf: explicitly disabled via build config
00:03:09.274 cfgfile: explicitly disabled via build config
00:03:09.274 distributor: explicitly disabled via build config
00:03:09.274 efd: explicitly disabled via build config
00:03:09.274 eventdev: explicitly disabled via build config
00:03:09.274 dispatcher: explicitly disabled via build config
00:03:09.274 gpudev: explicitly disabled via build config
00:03:09.274 gro: explicitly disabled via build config
00:03:09.274 gso: explicitly disabled via build config
00:03:09.274 ip_frag: explicitly disabled via build config
00:03:09.274 jobstats: explicitly disabled via build config
00:03:09.274 latencystats: explicitly disabled via build config
00:03:09.274 lpm: explicitly disabled via build config
00:03:09.274 member: explicitly disabled via build config
00:03:09.274 pcapng: explicitly disabled via build config
00:03:09.274 rawdev: explicitly disabled via build config
00:03:09.274 regexdev: explicitly disabled via build config
00:03:09.274 mldev: explicitly disabled via build config
00:03:09.274 rib: explicitly disabled via build config
00:03:09.274 sched: explicitly disabled via build config
00:03:09.274 stack: explicitly disabled via build config
00:03:09.274 ipsec: explicitly disabled via build config
00:03:09.274 pdcp: explicitly disabled via build config
00:03:09.274 fib: explicitly disabled via build config
00:03:09.274 port: explicitly disabled via build config
00:03:09.274 pdump: explicitly disabled via build config
00:03:09.274 table: explicitly disabled via build config
00:03:09.274 pipeline: explicitly disabled via build config
00:03:09.274 graph: explicitly disabled via build config
00:03:09.274 node: explicitly disabled via build config
00:03:09.274
00:03:09.274 drivers:
00:03:09.274 common/cpt: not in enabled drivers build config
00:03:09.274 common/dpaax: not in enabled drivers build config
00:03:09.274 common/iavf: not in enabled drivers build config
00:03:09.274 common/idpf: not in enabled drivers build config
00:03:09.274 common/ionic: not in enabled drivers build config
00:03:09.274 common/mvep: not in enabled drivers build config
00:03:09.274 common/octeontx: not in enabled drivers build config
00:03:09.274 bus/auxiliary: not in enabled drivers build config
00:03:09.274 bus/cdx: not in enabled drivers build config
00:03:09.274 bus/dpaa: not in enabled drivers build config
00:03:09.274 bus/fslmc: not in enabled drivers build config
00:03:09.274 bus/ifpga: not in enabled drivers build config
00:03:09.274 bus/platform: not in enabled drivers build config
00:03:09.274 bus/uacce: not in enabled drivers build config
00:03:09.274 bus/vmbus: not in enabled drivers build config
00:03:09.274 common/cnxk: not in enabled drivers build config
00:03:09.274 common/mlx5: not in enabled drivers build config
00:03:09.274 common/nfp: not in enabled drivers build config
00:03:09.274 common/nitrox: not in enabled drivers build config
00:03:09.274 common/qat: not in enabled drivers build config
00:03:09.274 common/sfc_efx: not in enabled drivers build config
00:03:09.274 mempool/bucket: not in enabled drivers build config
00:03:09.274 mempool/cnxk: not in enabled drivers build config
00:03:09.274 mempool/dpaa: not in enabled drivers build config
00:03:09.274 mempool/dpaa2: not in enabled drivers build config
00:03:09.274 mempool/octeontx: not in enabled drivers build config
00:03:09.274 mempool/stack: not in enabled drivers build config
00:03:09.274 dma/cnxk: not in enabled drivers build config
00:03:09.274 dma/dpaa: not in enabled drivers build config
00:03:09.274 dma/dpaa2: not in enabled drivers build config
00:03:09.274 dma/hisilicon: not in enabled drivers build config
00:03:09.274 dma/idxd: not in enabled drivers build config
00:03:09.274 dma/ioat: not in enabled drivers build config
00:03:09.274 dma/skeleton: not in enabled drivers build config
00:03:09.274 net/af_packet: not in enabled drivers build config
00:03:09.274 net/af_xdp: not in enabled drivers build config
00:03:09.274 net/ark: not in enabled drivers build config
00:03:09.274 net/atlantic: not in enabled drivers build config
00:03:09.274 net/avp: not in enabled drivers build config
00:03:09.274 net/axgbe: not in enabled drivers build config
00:03:09.274 net/bnx2x: not in enabled drivers build config
00:03:09.274 net/bnxt: not in enabled drivers build config
00:03:09.274 net/bonding: not in enabled drivers build config
00:03:09.274 net/cnxk: not in enabled drivers build config
00:03:09.274 net/cpfl: not in enabled drivers build config
00:03:09.274 net/cxgbe: not in enabled drivers build config
00:03:09.274 net/dpaa: not in enabled drivers build config
00:03:09.274 net/dpaa2: not in enabled drivers build config
00:03:09.274 net/e1000: not in enabled drivers build config
00:03:09.274 net/ena: not in enabled drivers build config
00:03:09.274 net/enetc: not in enabled drivers build config
00:03:09.274 net/enetfec: not in enabled drivers build config
00:03:09.274 net/enic: not in enabled drivers build config
00:03:09.274 net/failsafe: not in enabled drivers build config
00:03:09.274 net/fm10k: not in enabled drivers build config
00:03:09.274 net/gve: not in enabled drivers build config
00:03:09.274 net/hinic: not in enabled drivers build config
00:03:09.274 net/hns3: not in enabled drivers build config
00:03:09.274 net/i40e: not in enabled drivers build config
00:03:09.274 net/iavf: not in enabled drivers build config
00:03:09.274 net/ice: not in enabled drivers build config
00:03:09.274 net/idpf: not in enabled drivers build config
00:03:09.274 net/igc: not in enabled drivers build config
00:03:09.274 net/ionic: not in enabled drivers build config
00:03:09.274 net/ipn3ke: not in enabled drivers build config
00:03:09.274 net/ixgbe: not in enabled drivers build config
00:03:09.274 net/mana: not in enabled drivers build config
00:03:09.274 net/memif: not in enabled drivers build config
00:03:09.274 net/mlx4: not in enabled drivers build config
00:03:09.274 net/mlx5: not in enabled drivers build config
00:03:09.274 net/mvneta: not in enabled drivers build config
00:03:09.274 net/mvpp2: not in enabled drivers build config
00:03:09.274 net/netvsc: not in enabled drivers build config
00:03:09.274 net/nfb: not in enabled drivers build config
00:03:09.274 net/nfp: not in enabled drivers build config
00:03:09.274 net/ngbe: not in enabled drivers build config
00:03:09.274 net/null: not in enabled drivers build config
00:03:09.274 net/octeontx: not in enabled drivers build config
00:03:09.274 net/octeon_ep: not in enabled drivers build config
00:03:09.274 net/pcap: not in enabled drivers build config
00:03:09.274 net/pfe: not in enabled drivers build config
00:03:09.274 net/qede: not in enabled drivers build config
00:03:09.274 net/ring: not in enabled drivers build config
00:03:09.274 net/sfc: not in enabled drivers build config
00:03:09.274 net/softnic: not in enabled drivers build config
00:03:09.274 net/tap: not in enabled drivers build config
00:03:09.274 net/thunderx: not in enabled drivers build config
00:03:09.274 net/txgbe: not in enabled drivers build config
00:03:09.274 net/vdev_netvsc: not in enabled drivers build config
00:03:09.274 net/vhost: not in enabled drivers build config
00:03:09.274 net/virtio: not in enabled drivers build config
00:03:09.275 net/vmxnet3: not in enabled drivers build config
00:03:09.275 raw/*: missing internal dependency, "rawdev"
00:03:09.275 crypto/armv8: not in enabled drivers build config
00:03:09.275 crypto/bcmfs: not in enabled drivers build config
00:03:09.275 crypto/caam_jr: not in enabled drivers build config
00:03:09.275 crypto/ccp: not in enabled drivers build config
00:03:09.275 crypto/cnxk: not in enabled drivers build config
00:03:09.275 crypto/dpaa_sec: not in enabled drivers build config
00:03:09.275 crypto/dpaa2_sec: not in enabled drivers build config
00:03:09.275 crypto/ipsec_mb: not in enabled drivers build config
00:03:09.275 crypto/mlx5: not in enabled drivers build config
00:03:09.275 crypto/mvsam: not in enabled drivers build config
00:03:09.275 crypto/nitrox: not in enabled drivers build config
00:03:09.275 crypto/null: not in enabled drivers build config
00:03:09.275 crypto/octeontx: not in enabled drivers build config
00:03:09.275 crypto/openssl: not in enabled drivers build config
00:03:09.275 crypto/scheduler: not in enabled drivers build config
00:03:09.275 crypto/uadk: not in enabled drivers build config
00:03:09.275 crypto/virtio: not in enabled drivers build config
00:03:09.275 compress/isal: not in enabled drivers build config
00:03:09.275 compress/mlx5: not in enabled drivers build config
00:03:09.275 compress/nitrox: not in enabled drivers build config
00:03:09.275 compress/octeontx: not in enabled drivers build config
00:03:09.275 compress/zlib: not in enabled drivers build config
00:03:09.275 regex/*: missing internal dependency, "regexdev"
00:03:09.275 ml/*: missing internal dependency, "mldev"
00:03:09.275 vdpa/ifc: not in enabled drivers build config
00:03:09.275 vdpa/mlx5: not in enabled drivers build config
00:03:09.275 vdpa/nfp: not in enabled drivers build config
00:03:09.275 vdpa/sfc: not in enabled drivers build config
00:03:09.275 event/*: missing internal dependency, "eventdev"
00:03:09.275 baseband/*: missing internal dependency, "bbdev"
00:03:09.275 gpu/*: missing internal dependency, "gpudev"
00:03:09.275
00:03:09.275
00:03:09.275 Build targets in project: 85
00:03:09.275
00:03:09.275 DPDK 24.03.0
00:03:09.275
00:03:09.275 User defined options
00:03:09.275 buildtype : debug
00:03:09.275 default_library : shared
00:03:09.275 libdir : lib
00:03:09.275 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:09.275 b_sanitize : address
00:03:09.275 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:09.275 c_link_args :
00:03:09.275 cpu_instruction_set: native
00:03:09.275 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:09.275 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:09.275 enable_docs : false
00:03:09.275 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:09.275 enable_kmods : false
00:03:09.275 max_lcores : 128
00:03:09.275 tests : false
00:03:09.275
00:03:09.275 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:09.275 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:09.275 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:09.275 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:09.275 [3/268] Linking static target lib/librte_kvargs.a
00:03:09.275 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:09.275 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:09.275 [6/268] Linking static target lib/librte_log.a
00:03:09.275 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:09.275 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:09.275 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:09.275 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:09.275 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:09.275 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:09.275 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:09.275 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:09.275 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:09.275 [16/268] Linking static target lib/librte_telemetry.a
00:03:09.275 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:09.275 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:09.275 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:09.275 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:09.535 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:09.535 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:09.535 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:09.535 [24/268] Linking target lib/librte_log.so.24.1
00:03:09.535 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:09.535 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:09.535 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:09.795 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:09.795 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:09.795 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:09.795 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:09.795 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:09.795 [33/268] Linking target lib/librte_kvargs.so.24.1
00:03:09.795 [34/268] Linking target lib/librte_telemetry.so.24.1
00:03:09.795 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:10.054 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:10.054 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:10.054 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:10.054 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:10.054 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:10.054 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:10.054 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:10.054 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:10.054 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:10.054 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:10.313 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:10.313 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:10.573 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:10.573 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:10.573 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:10.573 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:10.573 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:10.831 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:10.831 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:10.831 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:10.831 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:10.831 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:11.091 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:11.091 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:11.091 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:11.091 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:11.091 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:11.350 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:11.350 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:11.350 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:11.350 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:11.350 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:11.609 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:11.609 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:11.609 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:11.868 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:11.868 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:11.868 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:11.868 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:11.868 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:11.868 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:11.868 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:11.868 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:12.127 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:12.127 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:12.127 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:12.127 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:12.386 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:12.386 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:12.386 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:12.386 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:12.386 [87/268] Linking static target lib/librte_eal.a
00:03:12.645 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:12.645 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:12.645 [90/268] Linking static target lib/librte_ring.a
00:03:12.645 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:12.645 [92/268] Linking static target lib/librte_rcu.a
00:03:12.645 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:12.645 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:12.645 [95/268] Linking static target lib/librte_mempool.a
00:03:12.904 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:12.904 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:12.904 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:12.904 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:13.162 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.162 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:13.162 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.162 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:13.421 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:13.421 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:13.421 [106/268] Linking static target lib/librte_mbuf.a
00:03:13.421 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:13.421 [108/268] Linking static target lib/librte_meter.a
00:03:13.421 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:13.680 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:13.680 [111/268] Linking static target lib/librte_net.a
00:03:13.680 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:13.680 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:13.680 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:13.951 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.951 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.951 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:13.951 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:14.229 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:14.229 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:14.489 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:14.489 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:14.489 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:14.489 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:14.748 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:14.748 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:14.748 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:14.748 [128/268] Linking static target lib/librte_pci.a
00:03:14.748 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:15.007 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:15.007 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:15.007 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:15.007 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:15.007 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:15.007 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:15.007 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:15.007 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:15.007 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.007 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:15.267 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:15.267 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:15.267 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:15.267 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:15.267 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:15.267 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:15.267 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:15.267 [147/268] Linking static target lib/librte_cmdline.a
00:03:15.527 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:15.787 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:15.787 [150/268] Linking static target lib/librte_timer.a
00:03:15.787 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:15.787 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:15.787 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:16.046 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:16.046 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:16.306 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:16.306 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.306 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:16.566
[159/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:16.566 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:16.566 [161/268] Linking static target lib/librte_compressdev.a 00:03:16.566 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:16.566 [163/268] Linking static target lib/librte_hash.a 00:03:16.566 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:16.826 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:16.826 [166/268] Linking static target lib/librte_dmadev.a 00:03:16.826 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:16.826 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:16.826 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:16.826 [170/268] Linking static target lib/librte_ethdev.a 00:03:16.826 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:16.826 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.087 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:17.348 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:17.348 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:17.348 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:17.348 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:17.607 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.607 [179/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.607 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:17.607 [181/268] Compiling C object 
lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:17.868 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.868 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:17.868 [184/268] Linking static target lib/librte_cryptodev.a 00:03:17.868 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:17.868 [186/268] Linking static target lib/librte_power.a 00:03:18.127 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:18.127 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:18.127 [189/268] Linking static target lib/librte_reorder.a 00:03:18.127 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:18.127 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:18.127 [192/268] Linking static target lib/librte_security.a 00:03:18.387 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:18.647 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.647 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:18.908 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.908 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.167 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:19.167 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:19.167 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:19.427 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:19.427 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:19.427 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:19.687 
[204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:19.687 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:19.687 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:19.946 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:19.947 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:19.947 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:19.947 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:20.206 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.206 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:20.206 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:20.206 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.206 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.206 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.206 [217/268] Linking static target drivers/librte_bus_vdev.a 00:03:20.206 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.206 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:20.206 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:20.206 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:20.495 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:20.495 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.495 [224/268] Linking static target drivers/librte_mempool_ring.a 00:03:20.495 [225/268] Compiling C 
object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.495 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.774 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.717 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:23.096 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.096 [230/268] Linking target lib/librte_eal.so.24.1 00:03:23.096 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:23.096 [232/268] Linking target lib/librte_pci.so.24.1 00:03:23.096 [233/268] Linking target lib/librte_meter.so.24.1 00:03:23.096 [234/268] Linking target lib/librte_timer.so.24.1 00:03:23.096 [235/268] Linking target lib/librte_ring.so.24.1 00:03:23.096 [236/268] Linking target lib/librte_dmadev.so.24.1 00:03:23.097 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:23.356 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:23.356 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:23.356 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:23.356 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:23.356 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:23.356 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:23.356 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:23.356 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:23.356 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:23.356 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:23.647 [248/268] 
Linking target drivers/librte_mempool_ring.so.24.1 00:03:23.647 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:23.647 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:23.647 [251/268] Linking target lib/librte_net.so.24.1 00:03:23.647 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:23.647 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:23.647 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:23.908 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:23.908 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:23.908 [257/268] Linking target lib/librte_hash.so.24.1 00:03:23.908 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:23.908 [259/268] Linking target lib/librte_security.so.24.1 00:03:23.908 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:25.812 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.812 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:25.812 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:25.812 [264/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:25.812 [265/268] Linking target lib/librte_power.so.24.1 00:03:25.812 [266/268] Linking static target lib/librte_vhost.a 00:03:28.354 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.354 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:28.354 INFO: autodetecting backend as ninja 00:03:28.354 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:46.482 CC lib/ut/ut.o 00:03:46.482 CC lib/log/log.o 00:03:46.482 CC lib/ut_mock/mock.o 00:03:46.482 CC lib/log/log_flags.o 00:03:46.482 CC lib/log/log_deprecated.o 
00:03:46.482 LIB libspdk_log.a 00:03:46.482 LIB libspdk_ut.a 00:03:46.482 LIB libspdk_ut_mock.a 00:03:46.482 SO libspdk_log.so.7.1 00:03:46.482 SO libspdk_ut_mock.so.6.0 00:03:46.482 SO libspdk_ut.so.2.0 00:03:46.482 SYMLINK libspdk_ut.so 00:03:46.482 SYMLINK libspdk_log.so 00:03:46.482 SYMLINK libspdk_ut_mock.so 00:03:46.482 CC lib/ioat/ioat.o 00:03:46.482 CC lib/util/base64.o 00:03:46.482 CC lib/util/bit_array.o 00:03:46.482 CC lib/util/cpuset.o 00:03:46.482 CXX lib/trace_parser/trace.o 00:03:46.482 CC lib/util/crc32.o 00:03:46.482 CC lib/util/crc32c.o 00:03:46.482 CC lib/util/crc16.o 00:03:46.482 CC lib/dma/dma.o 00:03:46.482 CC lib/vfio_user/host/vfio_user_pci.o 00:03:46.482 CC lib/util/crc32_ieee.o 00:03:46.482 CC lib/util/crc64.o 00:03:46.482 CC lib/util/dif.o 00:03:46.482 CC lib/util/fd.o 00:03:46.482 CC lib/util/fd_group.o 00:03:46.482 LIB libspdk_dma.a 00:03:46.482 CC lib/util/file.o 00:03:46.482 SO libspdk_dma.so.5.0 00:03:46.482 LIB libspdk_ioat.a 00:03:46.482 CC lib/vfio_user/host/vfio_user.o 00:03:46.482 SYMLINK libspdk_dma.so 00:03:46.482 CC lib/util/hexlify.o 00:03:46.482 CC lib/util/iov.o 00:03:46.482 SO libspdk_ioat.so.7.0 00:03:46.482 CC lib/util/math.o 00:03:46.482 SYMLINK libspdk_ioat.so 00:03:46.482 CC lib/util/net.o 00:03:46.482 CC lib/util/pipe.o 00:03:46.482 CC lib/util/strerror_tls.o 00:03:46.482 CC lib/util/string.o 00:03:46.482 CC lib/util/uuid.o 00:03:46.482 CC lib/util/xor.o 00:03:46.482 CC lib/util/zipf.o 00:03:46.482 LIB libspdk_vfio_user.a 00:03:46.482 CC lib/util/md5.o 00:03:46.482 SO libspdk_vfio_user.so.5.0 00:03:46.482 SYMLINK libspdk_vfio_user.so 00:03:46.742 LIB libspdk_util.a 00:03:47.001 SO libspdk_util.so.10.1 00:03:47.001 LIB libspdk_trace_parser.a 00:03:47.001 SO libspdk_trace_parser.so.6.0 00:03:47.001 SYMLINK libspdk_util.so 00:03:47.001 SYMLINK libspdk_trace_parser.so 00:03:47.260 CC lib/vmd/led.o 00:03:47.260 CC lib/rdma_utils/rdma_utils.o 00:03:47.260 CC lib/vmd/vmd.o 00:03:47.260 CC lib/json/json_parse.o 00:03:47.260 
CC lib/json/json_util.o 00:03:47.260 CC lib/json/json_write.o 00:03:47.260 CC lib/env_dpdk/env.o 00:03:47.260 CC lib/idxd/idxd.o 00:03:47.260 CC lib/idxd/idxd_user.o 00:03:47.260 CC lib/conf/conf.o 00:03:47.260 CC lib/idxd/idxd_kernel.o 00:03:47.518 CC lib/env_dpdk/memory.o 00:03:47.518 CC lib/env_dpdk/pci.o 00:03:47.518 LIB libspdk_conf.a 00:03:47.518 CC lib/env_dpdk/init.o 00:03:47.518 LIB libspdk_rdma_utils.a 00:03:47.518 SO libspdk_conf.so.6.0 00:03:47.518 SO libspdk_rdma_utils.so.1.0 00:03:47.518 LIB libspdk_json.a 00:03:47.518 CC lib/env_dpdk/threads.o 00:03:47.518 SYMLINK libspdk_conf.so 00:03:47.518 CC lib/env_dpdk/pci_ioat.o 00:03:47.518 SO libspdk_json.so.6.0 00:03:47.518 SYMLINK libspdk_rdma_utils.so 00:03:47.518 CC lib/env_dpdk/pci_virtio.o 00:03:47.518 SYMLINK libspdk_json.so 00:03:47.778 CC lib/env_dpdk/pci_vmd.o 00:03:47.778 CC lib/env_dpdk/pci_idxd.o 00:03:47.778 CC lib/env_dpdk/pci_event.o 00:03:47.778 CC lib/rdma_provider/common.o 00:03:47.778 CC lib/env_dpdk/sigbus_handler.o 00:03:47.778 CC lib/env_dpdk/pci_dpdk.o 00:03:47.778 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:47.778 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:48.036 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:48.036 LIB libspdk_vmd.a 00:03:48.036 LIB libspdk_idxd.a 00:03:48.036 SO libspdk_vmd.so.6.0 00:03:48.036 CC lib/jsonrpc/jsonrpc_server.o 00:03:48.036 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:48.036 CC lib/jsonrpc/jsonrpc_client.o 00:03:48.036 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:48.036 SO libspdk_idxd.so.12.1 00:03:48.036 SYMLINK libspdk_vmd.so 00:03:48.036 SYMLINK libspdk_idxd.so 00:03:48.036 LIB libspdk_rdma_provider.a 00:03:48.295 SO libspdk_rdma_provider.so.7.0 00:03:48.295 LIB libspdk_jsonrpc.a 00:03:48.295 SYMLINK libspdk_rdma_provider.so 00:03:48.295 SO libspdk_jsonrpc.so.6.0 00:03:48.295 SYMLINK libspdk_jsonrpc.so 00:03:48.861 CC lib/rpc/rpc.o 00:03:48.861 LIB libspdk_env_dpdk.a 00:03:48.861 SO libspdk_env_dpdk.so.15.1 00:03:49.119 LIB libspdk_rpc.a 00:03:49.119 SO 
libspdk_rpc.so.6.0 00:03:49.119 SYMLINK libspdk_rpc.so 00:03:49.119 SYMLINK libspdk_env_dpdk.so 00:03:49.378 CC lib/notify/notify.o 00:03:49.378 CC lib/notify/notify_rpc.o 00:03:49.378 CC lib/trace/trace.o 00:03:49.378 CC lib/trace/trace_rpc.o 00:03:49.378 CC lib/trace/trace_flags.o 00:03:49.378 CC lib/keyring/keyring.o 00:03:49.378 CC lib/keyring/keyring_rpc.o 00:03:49.637 LIB libspdk_notify.a 00:03:49.637 SO libspdk_notify.so.6.0 00:03:49.637 SYMLINK libspdk_notify.so 00:03:49.637 LIB libspdk_keyring.a 00:03:49.637 LIB libspdk_trace.a 00:03:49.637 SO libspdk_keyring.so.2.0 00:03:49.895 SO libspdk_trace.so.11.0 00:03:49.895 SYMLINK libspdk_keyring.so 00:03:49.895 SYMLINK libspdk_trace.so 00:03:50.154 CC lib/sock/sock.o 00:03:50.154 CC lib/sock/sock_rpc.o 00:03:50.154 CC lib/thread/iobuf.o 00:03:50.154 CC lib/thread/thread.o 00:03:50.723 LIB libspdk_sock.a 00:03:50.723 SO libspdk_sock.so.10.0 00:03:50.982 SYMLINK libspdk_sock.so 00:03:51.248 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:51.248 CC lib/nvme/nvme_ctrlr.o 00:03:51.248 CC lib/nvme/nvme_pcie_common.o 00:03:51.248 CC lib/nvme/nvme_fabric.o 00:03:51.248 CC lib/nvme/nvme_ns.o 00:03:51.248 CC lib/nvme/nvme_ns_cmd.o 00:03:51.248 CC lib/nvme/nvme_qpair.o 00:03:51.248 CC lib/nvme/nvme_pcie.o 00:03:51.248 CC lib/nvme/nvme.o 00:03:52.197 CC lib/nvme/nvme_quirks.o 00:03:52.197 CC lib/nvme/nvme_transport.o 00:03:52.197 LIB libspdk_thread.a 00:03:52.197 SO libspdk_thread.so.11.0 00:03:52.197 CC lib/nvme/nvme_discovery.o 00:03:52.197 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:52.197 SYMLINK libspdk_thread.so 00:03:52.197 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:52.197 CC lib/nvme/nvme_tcp.o 00:03:52.457 CC lib/nvme/nvme_opal.o 00:03:52.457 CC lib/accel/accel.o 00:03:52.457 CC lib/nvme/nvme_io_msg.o 00:03:52.457 CC lib/nvme/nvme_poll_group.o 00:03:52.718 CC lib/nvme/nvme_zns.o 00:03:52.718 CC lib/accel/accel_rpc.o 00:03:52.718 CC lib/nvme/nvme_stubs.o 00:03:52.978 CC lib/blob/blobstore.o 00:03:52.978 CC lib/blob/request.o 
00:03:52.978 CC lib/init/json_config.o 00:03:53.238 CC lib/virtio/virtio.o 00:03:53.238 CC lib/blob/zeroes.o 00:03:53.238 CC lib/blob/blob_bs_dev.o 00:03:53.238 CC lib/init/subsystem.o 00:03:53.238 CC lib/init/subsystem_rpc.o 00:03:53.238 CC lib/fsdev/fsdev.o 00:03:53.497 CC lib/fsdev/fsdev_io.o 00:03:53.497 CC lib/fsdev/fsdev_rpc.o 00:03:53.497 CC lib/init/rpc.o 00:03:53.497 CC lib/nvme/nvme_auth.o 00:03:53.497 CC lib/virtio/virtio_vhost_user.o 00:03:53.497 CC lib/accel/accel_sw.o 00:03:53.497 CC lib/virtio/virtio_vfio_user.o 00:03:53.497 LIB libspdk_init.a 00:03:53.497 SO libspdk_init.so.6.0 00:03:53.757 SYMLINK libspdk_init.so 00:03:53.757 CC lib/virtio/virtio_pci.o 00:03:53.757 CC lib/nvme/nvme_cuse.o 00:03:53.757 CC lib/nvme/nvme_rdma.o 00:03:54.016 LIB libspdk_accel.a 00:03:54.016 SO libspdk_accel.so.16.0 00:03:54.016 LIB libspdk_fsdev.a 00:03:54.016 CC lib/event/app.o 00:03:54.016 CC lib/event/log_rpc.o 00:03:54.016 CC lib/event/reactor.o 00:03:54.016 LIB libspdk_virtio.a 00:03:54.016 SO libspdk_fsdev.so.2.0 00:03:54.016 SYMLINK libspdk_accel.so 00:03:54.016 CC lib/event/app_rpc.o 00:03:54.016 SO libspdk_virtio.so.7.0 00:03:54.016 SYMLINK libspdk_fsdev.so 00:03:54.016 SYMLINK libspdk_virtio.so 00:03:54.275 CC lib/event/scheduler_static.o 00:03:54.275 CC lib/bdev/bdev.o 00:03:54.275 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:54.275 CC lib/bdev/bdev_rpc.o 00:03:54.275 CC lib/bdev/bdev_zone.o 00:03:54.535 CC lib/bdev/part.o 00:03:54.535 CC lib/bdev/scsi_nvme.o 00:03:54.535 LIB libspdk_event.a 00:03:54.535 SO libspdk_event.so.14.0 00:03:54.535 SYMLINK libspdk_event.so 00:03:55.101 LIB libspdk_fuse_dispatcher.a 00:03:55.101 SO libspdk_fuse_dispatcher.so.1.0 00:03:55.101 SYMLINK libspdk_fuse_dispatcher.so 00:03:55.101 LIB libspdk_nvme.a 00:03:55.359 SO libspdk_nvme.so.15.0 00:03:55.617 SYMLINK libspdk_nvme.so 00:03:56.552 LIB libspdk_blob.a 00:03:56.552 SO libspdk_blob.so.12.0 00:03:56.810 SYMLINK libspdk_blob.so 00:03:57.068 LIB libspdk_bdev.a 00:03:57.069 CC 
lib/blobfs/tree.o 00:03:57.069 CC lib/blobfs/blobfs.o 00:03:57.069 CC lib/lvol/lvol.o 00:03:57.069 SO libspdk_bdev.so.17.0 00:03:57.326 SYMLINK libspdk_bdev.so 00:03:57.326 CC lib/nvmf/ctrlr.o 00:03:57.326 CC lib/nvmf/ctrlr_bdev.o 00:03:57.326 CC lib/nvmf/ctrlr_discovery.o 00:03:57.326 CC lib/nvmf/subsystem.o 00:03:57.326 CC lib/ftl/ftl_core.o 00:03:57.326 CC lib/ublk/ublk.o 00:03:57.326 CC lib/scsi/dev.o 00:03:57.326 CC lib/nbd/nbd.o 00:03:57.590 CC lib/scsi/lun.o 00:03:57.861 CC lib/ftl/ftl_init.o 00:03:57.861 CC lib/nbd/nbd_rpc.o 00:03:57.861 CC lib/nvmf/nvmf.o 00:03:57.861 CC lib/scsi/port.o 00:03:58.120 LIB libspdk_blobfs.a 00:03:58.120 CC lib/ftl/ftl_layout.o 00:03:58.120 SO libspdk_blobfs.so.11.0 00:03:58.120 LIB libspdk_nbd.a 00:03:58.120 SO libspdk_nbd.so.7.0 00:03:58.120 SYMLINK libspdk_blobfs.so 00:03:58.120 CC lib/scsi/scsi.o 00:03:58.120 CC lib/scsi/scsi_bdev.o 00:03:58.120 SYMLINK libspdk_nbd.so 00:03:58.120 CC lib/scsi/scsi_pr.o 00:03:58.120 CC lib/ublk/ublk_rpc.o 00:03:58.120 LIB libspdk_lvol.a 00:03:58.120 CC lib/nvmf/nvmf_rpc.o 00:03:58.120 SO libspdk_lvol.so.11.0 00:03:58.378 CC lib/scsi/scsi_rpc.o 00:03:58.378 SYMLINK libspdk_lvol.so 00:03:58.378 CC lib/scsi/task.o 00:03:58.378 LIB libspdk_ublk.a 00:03:58.378 CC lib/ftl/ftl_debug.o 00:03:58.378 SO libspdk_ublk.so.3.0 00:03:58.378 SYMLINK libspdk_ublk.so 00:03:58.378 CC lib/ftl/ftl_io.o 00:03:58.378 CC lib/ftl/ftl_sb.o 00:03:58.636 CC lib/nvmf/transport.o 00:03:58.636 CC lib/nvmf/tcp.o 00:03:58.636 CC lib/ftl/ftl_l2p.o 00:03:58.636 CC lib/ftl/ftl_l2p_flat.o 00:03:58.636 LIB libspdk_scsi.a 00:03:58.636 SO libspdk_scsi.so.9.0 00:03:58.636 CC lib/nvmf/stubs.o 00:03:58.894 CC lib/nvmf/mdns_server.o 00:03:58.894 SYMLINK libspdk_scsi.so 00:03:58.894 CC lib/nvmf/rdma.o 00:03:58.894 CC lib/ftl/ftl_nv_cache.o 00:03:58.894 CC lib/ftl/ftl_band.o 00:03:58.894 CC lib/iscsi/conn.o 00:03:59.152 CC lib/iscsi/init_grp.o 00:03:59.152 CC lib/iscsi/iscsi.o 00:03:59.152 CC lib/nvmf/auth.o 00:03:59.411 CC 
lib/ftl/ftl_band_ops.o 00:03:59.411 CC lib/vhost/vhost.o 00:03:59.411 CC lib/iscsi/param.o 00:03:59.411 CC lib/iscsi/portal_grp.o 00:03:59.670 CC lib/iscsi/tgt_node.o 00:03:59.670 CC lib/iscsi/iscsi_subsystem.o 00:03:59.670 CC lib/iscsi/iscsi_rpc.o 00:03:59.670 CC lib/ftl/ftl_writer.o 00:03:59.929 CC lib/iscsi/task.o 00:03:59.929 CC lib/ftl/ftl_rq.o 00:03:59.929 CC lib/ftl/ftl_reloc.o 00:03:59.929 CC lib/ftl/ftl_l2p_cache.o 00:04:00.187 CC lib/ftl/ftl_p2l.o 00:04:00.187 CC lib/ftl/ftl_p2l_log.o 00:04:00.187 CC lib/ftl/mngt/ftl_mngt.o 00:04:00.187 CC lib/vhost/vhost_rpc.o 00:04:00.187 CC lib/vhost/vhost_scsi.o 00:04:00.187 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:00.445 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:00.445 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:00.445 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:00.445 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:00.445 CC lib/vhost/vhost_blk.o 00:04:00.445 CC lib/vhost/rte_vhost_user.o 00:04:00.445 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:00.703 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:00.703 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:00.703 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:00.703 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:00.703 LIB libspdk_iscsi.a 00:04:00.703 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:00.703 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:00.960 SO libspdk_iscsi.so.8.0 00:04:00.960 CC lib/ftl/utils/ftl_conf.o 00:04:00.960 CC lib/ftl/utils/ftl_md.o 00:04:00.960 CC lib/ftl/utils/ftl_mempool.o 00:04:00.960 CC lib/ftl/utils/ftl_bitmap.o 00:04:00.960 SYMLINK libspdk_iscsi.so 00:04:00.960 CC lib/ftl/utils/ftl_property.o 00:04:01.219 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:01.219 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:01.219 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:01.219 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:01.219 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:01.219 LIB libspdk_nvmf.a 00:04:01.219 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:01.219 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:01.219 CC 
lib/ftl/upgrade/ftl_sb_v3.o 00:04:01.478 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:01.478 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:01.478 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:01.478 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:01.478 SO libspdk_nvmf.so.20.0 00:04:01.478 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:01.478 CC lib/ftl/base/ftl_base_dev.o 00:04:01.478 CC lib/ftl/base/ftl_base_bdev.o 00:04:01.478 CC lib/ftl/ftl_trace.o 00:04:01.478 LIB libspdk_vhost.a 00:04:01.736 SO libspdk_vhost.so.8.0 00:04:01.736 SYMLINK libspdk_nvmf.so 00:04:01.736 SYMLINK libspdk_vhost.so 00:04:01.736 LIB libspdk_ftl.a 00:04:01.994 SO libspdk_ftl.so.9.0 00:04:02.253 SYMLINK libspdk_ftl.so 00:04:02.513 CC module/env_dpdk/env_dpdk_rpc.o 00:04:02.773 CC module/keyring/linux/keyring.o 00:04:02.773 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:02.773 CC module/fsdev/aio/fsdev_aio.o 00:04:02.773 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:02.773 CC module/sock/posix/posix.o 00:04:02.773 CC module/keyring/file/keyring.o 00:04:02.773 CC module/scheduler/gscheduler/gscheduler.o 00:04:02.773 CC module/accel/error/accel_error.o 00:04:02.773 CC module/blob/bdev/blob_bdev.o 00:04:02.773 LIB libspdk_env_dpdk_rpc.a 00:04:02.773 SO libspdk_env_dpdk_rpc.so.6.0 00:04:02.773 SYMLINK libspdk_env_dpdk_rpc.so 00:04:02.773 CC module/keyring/linux/keyring_rpc.o 00:04:02.773 CC module/accel/error/accel_error_rpc.o 00:04:02.773 CC module/keyring/file/keyring_rpc.o 00:04:02.773 LIB libspdk_scheduler_dpdk_governor.a 00:04:02.773 LIB libspdk_scheduler_gscheduler.a 00:04:02.773 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:02.773 SO libspdk_scheduler_gscheduler.so.4.0 00:04:02.773 LIB libspdk_scheduler_dynamic.a 00:04:02.773 SO libspdk_scheduler_dynamic.so.4.0 00:04:03.033 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:03.033 SYMLINK libspdk_scheduler_gscheduler.so 00:04:03.033 LIB libspdk_keyring_linux.a 00:04:03.033 LIB libspdk_keyring_file.a 00:04:03.033 LIB libspdk_accel_error.a 00:04:03.033 
SYMLINK libspdk_scheduler_dynamic.so 00:04:03.033 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:03.033 SO libspdk_keyring_linux.so.1.0 00:04:03.033 SO libspdk_keyring_file.so.2.0 00:04:03.033 SO libspdk_accel_error.so.2.0 00:04:03.033 LIB libspdk_blob_bdev.a 00:04:03.033 SO libspdk_blob_bdev.so.12.0 00:04:03.033 CC module/accel/ioat/accel_ioat.o 00:04:03.033 SYMLINK libspdk_keyring_file.so 00:04:03.033 SYMLINK libspdk_keyring_linux.so 00:04:03.033 CC module/accel/dsa/accel_dsa.o 00:04:03.034 CC module/accel/ioat/accel_ioat_rpc.o 00:04:03.034 SYMLINK libspdk_accel_error.so 00:04:03.034 SYMLINK libspdk_blob_bdev.so 00:04:03.034 CC module/accel/dsa/accel_dsa_rpc.o 00:04:03.034 CC module/fsdev/aio/linux_aio_mgr.o 00:04:03.034 CC module/accel/iaa/accel_iaa.o 00:04:03.034 CC module/accel/iaa/accel_iaa_rpc.o 00:04:03.293 LIB libspdk_accel_ioat.a 00:04:03.294 SO libspdk_accel_ioat.so.6.0 00:04:03.294 CC module/bdev/delay/vbdev_delay.o 00:04:03.294 LIB libspdk_accel_iaa.a 00:04:03.294 SYMLINK libspdk_accel_ioat.so 00:04:03.294 CC module/bdev/error/vbdev_error.o 00:04:03.294 SO libspdk_accel_iaa.so.3.0 00:04:03.294 LIB libspdk_accel_dsa.a 00:04:03.294 SO libspdk_accel_dsa.so.5.0 00:04:03.294 CC module/bdev/gpt/gpt.o 00:04:03.294 SYMLINK libspdk_accel_iaa.so 00:04:03.294 LIB libspdk_fsdev_aio.a 00:04:03.294 CC module/bdev/lvol/vbdev_lvol.o 00:04:03.294 CC module/blobfs/bdev/blobfs_bdev.o 00:04:03.553 SYMLINK libspdk_accel_dsa.so 00:04:03.553 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:03.553 SO libspdk_fsdev_aio.so.1.0 00:04:03.553 CC module/bdev/malloc/bdev_malloc.o 00:04:03.553 LIB libspdk_sock_posix.a 00:04:03.553 SO libspdk_sock_posix.so.6.0 00:04:03.553 SYMLINK libspdk_fsdev_aio.so 00:04:03.553 CC module/bdev/null/bdev_null.o 00:04:03.554 CC module/bdev/gpt/vbdev_gpt.o 00:04:03.554 CC module/bdev/error/vbdev_error_rpc.o 00:04:03.554 SYMLINK libspdk_sock_posix.so 00:04:03.554 LIB libspdk_blobfs_bdev.a 00:04:03.554 SO libspdk_blobfs_bdev.so.6.0 00:04:03.554 CC 
module/bdev/delay/vbdev_delay_rpc.o 00:04:03.814 CC module/bdev/nvme/bdev_nvme.o 00:04:03.814 SYMLINK libspdk_blobfs_bdev.so 00:04:03.814 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:03.814 CC module/bdev/passthru/vbdev_passthru.o 00:04:03.814 LIB libspdk_bdev_error.a 00:04:03.814 CC module/bdev/raid/bdev_raid.o 00:04:03.814 SO libspdk_bdev_error.so.6.0 00:04:03.814 CC module/bdev/null/bdev_null_rpc.o 00:04:03.814 LIB libspdk_bdev_delay.a 00:04:03.814 LIB libspdk_bdev_gpt.a 00:04:03.814 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:03.814 SO libspdk_bdev_delay.so.6.0 00:04:03.814 SYMLINK libspdk_bdev_error.so 00:04:03.814 SO libspdk_bdev_gpt.so.6.0 00:04:04.074 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:04.074 SYMLINK libspdk_bdev_delay.so 00:04:04.074 SYMLINK libspdk_bdev_gpt.so 00:04:04.074 CC module/bdev/nvme/nvme_rpc.o 00:04:04.074 LIB libspdk_bdev_null.a 00:04:04.074 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:04.074 SO libspdk_bdev_null.so.6.0 00:04:04.074 LIB libspdk_bdev_malloc.a 00:04:04.074 SO libspdk_bdev_malloc.so.6.0 00:04:04.074 CC module/bdev/split/vbdev_split.o 00:04:04.074 SYMLINK libspdk_bdev_null.so 00:04:04.074 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:04.074 SYMLINK libspdk_bdev_malloc.so 00:04:04.074 CC module/bdev/raid/bdev_raid_rpc.o 00:04:04.074 LIB libspdk_bdev_passthru.a 00:04:04.334 CC module/bdev/nvme/bdev_mdns_client.o 00:04:04.334 SO libspdk_bdev_passthru.so.6.0 00:04:04.334 CC module/bdev/aio/bdev_aio.o 00:04:04.334 SYMLINK libspdk_bdev_passthru.so 00:04:04.334 CC module/bdev/nvme/vbdev_opal.o 00:04:04.334 CC module/bdev/split/vbdev_split_rpc.o 00:04:04.334 LIB libspdk_bdev_lvol.a 00:04:04.334 SO libspdk_bdev_lvol.so.6.0 00:04:04.334 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:04.334 CC module/bdev/raid/bdev_raid_sb.o 00:04:04.334 CC module/bdev/raid/raid0.o 00:04:04.334 SYMLINK libspdk_bdev_lvol.so 00:04:04.334 CC module/bdev/raid/raid1.o 00:04:04.334 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:04.334 LIB 
libspdk_bdev_split.a 00:04:04.594 SO libspdk_bdev_split.so.6.0 00:04:04.594 SYMLINK libspdk_bdev_split.so 00:04:04.594 CC module/bdev/raid/concat.o 00:04:04.594 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:04.594 LIB libspdk_bdev_zone_block.a 00:04:04.594 SO libspdk_bdev_zone_block.so.6.0 00:04:04.594 CC module/bdev/aio/bdev_aio_rpc.o 00:04:04.594 CC module/bdev/ftl/bdev_ftl.o 00:04:04.594 CC module/bdev/raid/raid5f.o 00:04:04.594 SYMLINK libspdk_bdev_zone_block.so 00:04:04.594 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:04.853 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:04.853 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:04.853 CC module/bdev/iscsi/bdev_iscsi.o 00:04:04.853 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:04.853 LIB libspdk_bdev_aio.a 00:04:04.853 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:04.853 SO libspdk_bdev_aio.so.6.0 00:04:04.853 SYMLINK libspdk_bdev_aio.so 00:04:04.853 LIB libspdk_bdev_ftl.a 00:04:05.118 SO libspdk_bdev_ftl.so.6.0 00:04:05.118 SYMLINK libspdk_bdev_ftl.so 00:04:05.118 LIB libspdk_bdev_iscsi.a 00:04:05.118 SO libspdk_bdev_iscsi.so.6.0 00:04:05.118 LIB libspdk_bdev_raid.a 00:04:05.118 SYMLINK libspdk_bdev_iscsi.so 00:04:05.389 SO libspdk_bdev_raid.so.6.0 00:04:05.389 LIB libspdk_bdev_virtio.a 00:04:05.389 SO libspdk_bdev_virtio.so.6.0 00:04:05.389 SYMLINK libspdk_bdev_raid.so 00:04:05.389 SYMLINK libspdk_bdev_virtio.so 00:04:06.334 LIB libspdk_bdev_nvme.a 00:04:06.334 SO libspdk_bdev_nvme.so.7.1 00:04:06.593 SYMLINK libspdk_bdev_nvme.so 00:04:07.161 CC module/event/subsystems/iobuf/iobuf.o 00:04:07.161 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:07.161 CC module/event/subsystems/fsdev/fsdev.o 00:04:07.161 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:07.161 CC module/event/subsystems/vmd/vmd.o 00:04:07.161 CC module/event/subsystems/sock/sock.o 00:04:07.161 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:07.161 CC module/event/subsystems/keyring/keyring.o 00:04:07.161 CC 
module/event/subsystems/scheduler/scheduler.o 00:04:07.161 LIB libspdk_event_fsdev.a 00:04:07.161 LIB libspdk_event_keyring.a 00:04:07.161 LIB libspdk_event_sock.a 00:04:07.161 LIB libspdk_event_vmd.a 00:04:07.161 LIB libspdk_event_iobuf.a 00:04:07.161 LIB libspdk_event_scheduler.a 00:04:07.161 LIB libspdk_event_vhost_blk.a 00:04:07.161 SO libspdk_event_fsdev.so.1.0 00:04:07.161 SO libspdk_event_keyring.so.1.0 00:04:07.161 SO libspdk_event_sock.so.5.0 00:04:07.161 SO libspdk_event_scheduler.so.4.0 00:04:07.161 SO libspdk_event_vhost_blk.so.3.0 00:04:07.161 SO libspdk_event_vmd.so.6.0 00:04:07.161 SO libspdk_event_iobuf.so.3.0 00:04:07.419 SYMLINK libspdk_event_fsdev.so 00:04:07.419 SYMLINK libspdk_event_sock.so 00:04:07.419 SYMLINK libspdk_event_keyring.so 00:04:07.419 SYMLINK libspdk_event_vhost_blk.so 00:04:07.419 SYMLINK libspdk_event_scheduler.so 00:04:07.419 SYMLINK libspdk_event_vmd.so 00:04:07.419 SYMLINK libspdk_event_iobuf.so 00:04:07.676 CC module/event/subsystems/accel/accel.o 00:04:07.934 LIB libspdk_event_accel.a 00:04:07.934 SO libspdk_event_accel.so.6.0 00:04:07.934 SYMLINK libspdk_event_accel.so 00:04:08.193 CC module/event/subsystems/bdev/bdev.o 00:04:08.451 LIB libspdk_event_bdev.a 00:04:08.451 SO libspdk_event_bdev.so.6.0 00:04:08.709 SYMLINK libspdk_event_bdev.so 00:04:08.967 CC module/event/subsystems/scsi/scsi.o 00:04:08.967 CC module/event/subsystems/ublk/ublk.o 00:04:08.967 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:08.967 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:08.967 CC module/event/subsystems/nbd/nbd.o 00:04:08.967 LIB libspdk_event_ublk.a 00:04:08.967 LIB libspdk_event_nbd.a 00:04:08.967 LIB libspdk_event_scsi.a 00:04:08.967 SO libspdk_event_nbd.so.6.0 00:04:08.967 SO libspdk_event_ublk.so.3.0 00:04:08.967 SO libspdk_event_scsi.so.6.0 00:04:09.224 SYMLINK libspdk_event_ublk.so 00:04:09.224 SYMLINK libspdk_event_scsi.so 00:04:09.224 SYMLINK libspdk_event_nbd.so 00:04:09.224 LIB libspdk_event_nvmf.a 00:04:09.224 SO 
libspdk_event_nvmf.so.6.0 00:04:09.224 SYMLINK libspdk_event_nvmf.so 00:04:09.483 CC module/event/subsystems/iscsi/iscsi.o 00:04:09.483 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:09.483 LIB libspdk_event_iscsi.a 00:04:09.483 LIB libspdk_event_vhost_scsi.a 00:04:09.742 SO libspdk_event_iscsi.so.6.0 00:04:09.742 SO libspdk_event_vhost_scsi.so.3.0 00:04:09.742 SYMLINK libspdk_event_iscsi.so 00:04:09.742 SYMLINK libspdk_event_vhost_scsi.so 00:04:10.001 SO libspdk.so.6.0 00:04:10.001 SYMLINK libspdk.so 00:04:10.260 CC app/spdk_nvme_identify/identify.o 00:04:10.260 CXX app/trace/trace.o 00:04:10.260 CC app/spdk_lspci/spdk_lspci.o 00:04:10.260 CC app/spdk_nvme_perf/perf.o 00:04:10.260 CC app/trace_record/trace_record.o 00:04:10.260 CC app/iscsi_tgt/iscsi_tgt.o 00:04:10.260 CC app/nvmf_tgt/nvmf_main.o 00:04:10.260 CC app/spdk_tgt/spdk_tgt.o 00:04:10.260 CC examples/util/zipf/zipf.o 00:04:10.260 CC test/thread/poller_perf/poller_perf.o 00:04:10.260 LINK spdk_lspci 00:04:10.519 LINK nvmf_tgt 00:04:10.519 LINK zipf 00:04:10.519 LINK poller_perf 00:04:10.519 LINK spdk_tgt 00:04:10.519 LINK iscsi_tgt 00:04:10.519 LINK spdk_trace_record 00:04:10.519 CC app/spdk_nvme_discover/discovery_aer.o 00:04:10.519 LINK spdk_trace 00:04:10.776 CC app/spdk_top/spdk_top.o 00:04:10.776 CC examples/ioat/perf/perf.o 00:04:10.776 CC examples/ioat/verify/verify.o 00:04:10.776 CC test/dma/test_dma/test_dma.o 00:04:10.776 CC examples/vmd/lsvmd/lsvmd.o 00:04:10.776 LINK spdk_nvme_discover 00:04:10.776 CC examples/idxd/perf/perf.o 00:04:11.032 CC app/spdk_dd/spdk_dd.o 00:04:11.032 LINK lsvmd 00:04:11.032 LINK verify 00:04:11.032 LINK ioat_perf 00:04:11.032 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:11.032 LINK spdk_nvme_perf 00:04:11.292 LINK spdk_nvme_identify 00:04:11.292 LINK idxd_perf 00:04:11.292 TEST_HEADER include/spdk/accel.h 00:04:11.292 CC examples/vmd/led/led.o 00:04:11.292 TEST_HEADER include/spdk/accel_module.h 00:04:11.292 TEST_HEADER include/spdk/assert.h 00:04:11.292 
TEST_HEADER include/spdk/barrier.h 00:04:11.292 TEST_HEADER include/spdk/base64.h 00:04:11.292 TEST_HEADER include/spdk/bdev.h 00:04:11.292 TEST_HEADER include/spdk/bdev_module.h 00:04:11.292 TEST_HEADER include/spdk/bdev_zone.h 00:04:11.292 TEST_HEADER include/spdk/bit_array.h 00:04:11.292 TEST_HEADER include/spdk/bit_pool.h 00:04:11.292 TEST_HEADER include/spdk/blob_bdev.h 00:04:11.292 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:11.292 TEST_HEADER include/spdk/blobfs.h 00:04:11.292 TEST_HEADER include/spdk/blob.h 00:04:11.292 TEST_HEADER include/spdk/conf.h 00:04:11.292 TEST_HEADER include/spdk/config.h 00:04:11.292 TEST_HEADER include/spdk/cpuset.h 00:04:11.292 TEST_HEADER include/spdk/crc16.h 00:04:11.292 TEST_HEADER include/spdk/crc32.h 00:04:11.292 TEST_HEADER include/spdk/crc64.h 00:04:11.292 TEST_HEADER include/spdk/dif.h 00:04:11.292 TEST_HEADER include/spdk/dma.h 00:04:11.292 TEST_HEADER include/spdk/endian.h 00:04:11.292 TEST_HEADER include/spdk/env_dpdk.h 00:04:11.292 TEST_HEADER include/spdk/env.h 00:04:11.292 TEST_HEADER include/spdk/event.h 00:04:11.292 TEST_HEADER include/spdk/fd_group.h 00:04:11.292 TEST_HEADER include/spdk/fd.h 00:04:11.292 TEST_HEADER include/spdk/file.h 00:04:11.292 TEST_HEADER include/spdk/fsdev.h 00:04:11.292 TEST_HEADER include/spdk/fsdev_module.h 00:04:11.292 TEST_HEADER include/spdk/ftl.h 00:04:11.292 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:11.292 TEST_HEADER include/spdk/gpt_spec.h 00:04:11.292 TEST_HEADER include/spdk/hexlify.h 00:04:11.292 LINK spdk_dd 00:04:11.292 TEST_HEADER include/spdk/histogram_data.h 00:04:11.292 TEST_HEADER include/spdk/idxd.h 00:04:11.292 TEST_HEADER include/spdk/idxd_spec.h 00:04:11.292 LINK interrupt_tgt 00:04:11.292 TEST_HEADER include/spdk/init.h 00:04:11.292 TEST_HEADER include/spdk/ioat.h 00:04:11.292 TEST_HEADER include/spdk/ioat_spec.h 00:04:11.292 TEST_HEADER include/spdk/iscsi_spec.h 00:04:11.292 TEST_HEADER include/spdk/json.h 00:04:11.292 TEST_HEADER include/spdk/jsonrpc.h 
00:04:11.292 TEST_HEADER include/spdk/keyring.h 00:04:11.292 TEST_HEADER include/spdk/keyring_module.h 00:04:11.292 TEST_HEADER include/spdk/likely.h 00:04:11.292 TEST_HEADER include/spdk/log.h 00:04:11.292 TEST_HEADER include/spdk/lvol.h 00:04:11.292 TEST_HEADER include/spdk/md5.h 00:04:11.292 TEST_HEADER include/spdk/memory.h 00:04:11.292 LINK test_dma 00:04:11.292 TEST_HEADER include/spdk/mmio.h 00:04:11.292 TEST_HEADER include/spdk/nbd.h 00:04:11.292 TEST_HEADER include/spdk/net.h 00:04:11.292 TEST_HEADER include/spdk/notify.h 00:04:11.292 TEST_HEADER include/spdk/nvme.h 00:04:11.292 TEST_HEADER include/spdk/nvme_intel.h 00:04:11.292 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:11.292 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:11.292 TEST_HEADER include/spdk/nvme_spec.h 00:04:11.292 TEST_HEADER include/spdk/nvme_zns.h 00:04:11.292 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:11.292 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:11.292 TEST_HEADER include/spdk/nvmf.h 00:04:11.292 TEST_HEADER include/spdk/nvmf_spec.h 00:04:11.292 TEST_HEADER include/spdk/nvmf_transport.h 00:04:11.292 CC test/app/bdev_svc/bdev_svc.o 00:04:11.292 TEST_HEADER include/spdk/opal.h 00:04:11.292 TEST_HEADER include/spdk/opal_spec.h 00:04:11.292 TEST_HEADER include/spdk/pci_ids.h 00:04:11.292 TEST_HEADER include/spdk/pipe.h 00:04:11.292 TEST_HEADER include/spdk/queue.h 00:04:11.292 TEST_HEADER include/spdk/reduce.h 00:04:11.292 TEST_HEADER include/spdk/rpc.h 00:04:11.292 TEST_HEADER include/spdk/scheduler.h 00:04:11.292 TEST_HEADER include/spdk/scsi.h 00:04:11.292 TEST_HEADER include/spdk/scsi_spec.h 00:04:11.292 TEST_HEADER include/spdk/sock.h 00:04:11.292 TEST_HEADER include/spdk/stdinc.h 00:04:11.292 TEST_HEADER include/spdk/string.h 00:04:11.292 TEST_HEADER include/spdk/thread.h 00:04:11.292 TEST_HEADER include/spdk/trace.h 00:04:11.292 TEST_HEADER include/spdk/trace_parser.h 00:04:11.292 TEST_HEADER include/spdk/tree.h 00:04:11.292 TEST_HEADER include/spdk/ublk.h 00:04:11.292 
TEST_HEADER include/spdk/util.h 00:04:11.292 TEST_HEADER include/spdk/uuid.h 00:04:11.292 LINK led 00:04:11.292 TEST_HEADER include/spdk/version.h 00:04:11.292 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:11.292 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:11.292 TEST_HEADER include/spdk/vhost.h 00:04:11.292 TEST_HEADER include/spdk/vmd.h 00:04:11.292 TEST_HEADER include/spdk/xor.h 00:04:11.292 TEST_HEADER include/spdk/zipf.h 00:04:11.292 CXX test/cpp_headers/accel.o 00:04:11.551 CXX test/cpp_headers/accel_module.o 00:04:11.551 CC app/fio/nvme/fio_plugin.o 00:04:11.551 LINK bdev_svc 00:04:11.551 CXX test/cpp_headers/assert.o 00:04:11.551 CC test/event/event_perf/event_perf.o 00:04:11.551 CC test/env/mem_callbacks/mem_callbacks.o 00:04:11.551 CC app/fio/bdev/fio_plugin.o 00:04:11.551 CC test/env/vtophys/vtophys.o 00:04:11.551 CXX test/cpp_headers/barrier.o 00:04:11.551 LINK event_perf 00:04:11.810 CXX test/cpp_headers/base64.o 00:04:11.810 CC examples/thread/thread/thread_ex.o 00:04:11.810 LINK spdk_top 00:04:11.810 LINK vtophys 00:04:11.810 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:11.810 CXX test/cpp_headers/bdev.o 00:04:11.810 CC test/event/reactor/reactor.o 00:04:11.810 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:12.069 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:12.069 LINK thread 00:04:12.069 LINK reactor 00:04:12.069 LINK mem_callbacks 00:04:12.069 CXX test/cpp_headers/bdev_module.o 00:04:12.069 CC examples/sock/hello_world/hello_sock.o 00:04:12.069 LINK spdk_nvme 00:04:12.069 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:12.069 LINK spdk_bdev 00:04:12.328 CC test/event/reactor_perf/reactor_perf.o 00:04:12.328 LINK nvme_fuzz 00:04:12.328 CXX test/cpp_headers/bdev_zone.o 00:04:12.328 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:12.328 CC test/event/scheduler/scheduler.o 00:04:12.328 CC test/event/app_repeat/app_repeat.o 00:04:12.328 CC app/vhost/vhost.o 00:04:12.328 LINK hello_sock 00:04:12.328 LINK reactor_perf 00:04:12.328 CXX 
test/cpp_headers/bit_array.o 00:04:12.328 LINK env_dpdk_post_init 00:04:12.328 LINK app_repeat 00:04:12.587 CXX test/cpp_headers/bit_pool.o 00:04:12.587 LINK scheduler 00:04:12.587 CXX test/cpp_headers/blob_bdev.o 00:04:12.587 LINK vhost_fuzz 00:04:12.587 LINK vhost 00:04:12.587 CC examples/accel/perf/accel_perf.o 00:04:12.587 CXX test/cpp_headers/blobfs_bdev.o 00:04:12.587 CXX test/cpp_headers/blobfs.o 00:04:12.587 CC test/env/memory/memory_ut.o 00:04:12.587 CXX test/cpp_headers/blob.o 00:04:12.846 CC test/rpc_client/rpc_client_test.o 00:04:12.846 CC test/app/histogram_perf/histogram_perf.o 00:04:12.846 CC test/app/jsoncat/jsoncat.o 00:04:12.846 CC test/app/stub/stub.o 00:04:12.846 CC test/env/pci/pci_ut.o 00:04:12.846 CXX test/cpp_headers/conf.o 00:04:12.846 LINK histogram_perf 00:04:12.846 CC test/accel/dif/dif.o 00:04:12.846 LINK jsoncat 00:04:12.846 LINK rpc_client_test 00:04:12.846 LINK stub 00:04:13.104 CXX test/cpp_headers/config.o 00:04:13.104 CXX test/cpp_headers/cpuset.o 00:04:13.105 CXX test/cpp_headers/crc16.o 00:04:13.105 LINK accel_perf 00:04:13.105 CC examples/blob/hello_world/hello_blob.o 00:04:13.105 CXX test/cpp_headers/crc32.o 00:04:13.105 LINK pci_ut 00:04:13.105 CC test/blobfs/mkfs/mkfs.o 00:04:13.105 CC examples/blob/cli/blobcli.o 00:04:13.363 CC examples/nvme/hello_world/hello_world.o 00:04:13.363 CC examples/nvme/reconnect/reconnect.o 00:04:13.363 CXX test/cpp_headers/crc64.o 00:04:13.363 LINK mkfs 00:04:13.363 LINK hello_blob 00:04:13.625 CXX test/cpp_headers/dif.o 00:04:13.625 LINK hello_world 00:04:13.625 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:13.625 LINK reconnect 00:04:13.625 LINK dif 00:04:13.625 CXX test/cpp_headers/dma.o 00:04:13.625 LINK iscsi_fuzz 00:04:13.625 LINK blobcli 00:04:13.625 CC examples/nvme/arbitration/arbitration.o 00:04:13.900 CC test/nvme/aer/aer.o 00:04:13.900 CXX test/cpp_headers/endian.o 00:04:13.900 LINK memory_ut 00:04:13.900 CC test/lvol/esnap/esnap.o 00:04:13.900 CC examples/nvme/hotplug/hotplug.o 
00:04:13.900 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:13.900 CXX test/cpp_headers/env_dpdk.o 00:04:13.900 CC test/nvme/reset/reset.o 00:04:14.158 LINK nvme_manage 00:04:14.158 LINK aer 00:04:14.158 LINK arbitration 00:04:14.158 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:14.158 CXX test/cpp_headers/env.o 00:04:14.158 LINK cmb_copy 00:04:14.158 LINK hotplug 00:04:14.158 CC examples/bdev/hello_world/hello_bdev.o 00:04:14.158 LINK reset 00:04:14.158 CXX test/cpp_headers/event.o 00:04:14.158 CXX test/cpp_headers/fd_group.o 00:04:14.417 CC examples/nvme/abort/abort.o 00:04:14.417 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:14.417 CC examples/bdev/bdevperf/bdevperf.o 00:04:14.417 LINK hello_fsdev 00:04:14.417 CC test/bdev/bdevio/bdevio.o 00:04:14.417 LINK hello_bdev 00:04:14.417 CXX test/cpp_headers/fd.o 00:04:14.417 CXX test/cpp_headers/file.o 00:04:14.417 CC test/nvme/sgl/sgl.o 00:04:14.417 LINK pmr_persistence 00:04:14.676 CXX test/cpp_headers/fsdev.o 00:04:14.676 CXX test/cpp_headers/fsdev_module.o 00:04:14.676 CXX test/cpp_headers/ftl.o 00:04:14.676 CXX test/cpp_headers/fuse_dispatcher.o 00:04:14.676 CXX test/cpp_headers/gpt_spec.o 00:04:14.676 LINK abort 00:04:14.676 CXX test/cpp_headers/hexlify.o 00:04:14.676 LINK sgl 00:04:14.676 CC test/nvme/e2edp/nvme_dp.o 00:04:14.676 CXX test/cpp_headers/histogram_data.o 00:04:14.935 CC test/nvme/overhead/overhead.o 00:04:14.935 LINK bdevio 00:04:14.935 CC test/nvme/err_injection/err_injection.o 00:04:14.935 CC test/nvme/startup/startup.o 00:04:14.935 CXX test/cpp_headers/idxd.o 00:04:14.935 CC test/nvme/reserve/reserve.o 00:04:14.935 CC test/nvme/simple_copy/simple_copy.o 00:04:14.935 CXX test/cpp_headers/idxd_spec.o 00:04:14.935 LINK err_injection 00:04:15.196 LINK nvme_dp 00:04:15.196 CXX test/cpp_headers/init.o 00:04:15.196 LINK startup 00:04:15.196 LINK overhead 00:04:15.196 LINK reserve 00:04:15.196 CXX test/cpp_headers/ioat.o 00:04:15.196 LINK bdevperf 00:04:15.196 CXX test/cpp_headers/ioat_spec.o 
00:04:15.196 CC test/nvme/connect_stress/connect_stress.o 00:04:15.196 LINK simple_copy 00:04:15.196 CC test/nvme/boot_partition/boot_partition.o 00:04:15.456 CC test/nvme/compliance/nvme_compliance.o 00:04:15.456 CC test/nvme/fused_ordering/fused_ordering.o 00:04:15.456 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:15.456 CXX test/cpp_headers/iscsi_spec.o 00:04:15.456 CXX test/cpp_headers/json.o 00:04:15.456 LINK connect_stress 00:04:15.456 CC test/nvme/fdp/fdp.o 00:04:15.456 LINK boot_partition 00:04:15.456 LINK doorbell_aers 00:04:15.456 LINK fused_ordering 00:04:15.456 CXX test/cpp_headers/jsonrpc.o 00:04:15.716 CXX test/cpp_headers/keyring.o 00:04:15.716 CC test/nvme/cuse/cuse.o 00:04:15.716 CXX test/cpp_headers/keyring_module.o 00:04:15.716 CC examples/nvmf/nvmf/nvmf.o 00:04:15.716 LINK nvme_compliance 00:04:15.716 CXX test/cpp_headers/likely.o 00:04:15.716 CXX test/cpp_headers/log.o 00:04:15.716 CXX test/cpp_headers/lvol.o 00:04:15.716 CXX test/cpp_headers/md5.o 00:04:15.716 CXX test/cpp_headers/memory.o 00:04:15.716 LINK fdp 00:04:15.716 CXX test/cpp_headers/mmio.o 00:04:15.976 CXX test/cpp_headers/nbd.o 00:04:15.976 CXX test/cpp_headers/net.o 00:04:15.976 CXX test/cpp_headers/notify.o 00:04:15.976 CXX test/cpp_headers/nvme.o 00:04:15.976 CXX test/cpp_headers/nvme_intel.o 00:04:15.976 LINK nvmf 00:04:15.976 CXX test/cpp_headers/nvme_ocssd.o 00:04:15.976 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:15.976 CXX test/cpp_headers/nvme_spec.o 00:04:15.976 CXX test/cpp_headers/nvme_zns.o 00:04:15.976 CXX test/cpp_headers/nvmf_cmd.o 00:04:15.976 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:15.976 CXX test/cpp_headers/nvmf.o 00:04:16.236 CXX test/cpp_headers/nvmf_spec.o 00:04:16.236 CXX test/cpp_headers/nvmf_transport.o 00:04:16.236 CXX test/cpp_headers/opal.o 00:04:16.236 CXX test/cpp_headers/opal_spec.o 00:04:16.236 CXX test/cpp_headers/pci_ids.o 00:04:16.236 CXX test/cpp_headers/pipe.o 00:04:16.236 CXX test/cpp_headers/queue.o 00:04:16.236 CXX 
test/cpp_headers/reduce.o 00:04:16.236 CXX test/cpp_headers/rpc.o 00:04:16.236 CXX test/cpp_headers/scheduler.o 00:04:16.236 CXX test/cpp_headers/scsi.o 00:04:16.236 CXX test/cpp_headers/scsi_spec.o 00:04:16.236 CXX test/cpp_headers/sock.o 00:04:16.496 CXX test/cpp_headers/stdinc.o 00:04:16.496 CXX test/cpp_headers/string.o 00:04:16.496 CXX test/cpp_headers/thread.o 00:04:16.496 CXX test/cpp_headers/trace.o 00:04:16.496 CXX test/cpp_headers/trace_parser.o 00:04:16.496 CXX test/cpp_headers/tree.o 00:04:16.496 CXX test/cpp_headers/ublk.o 00:04:16.496 CXX test/cpp_headers/util.o 00:04:16.496 CXX test/cpp_headers/uuid.o 00:04:16.496 CXX test/cpp_headers/version.o 00:04:16.496 CXX test/cpp_headers/vfio_user_pci.o 00:04:16.496 CXX test/cpp_headers/vfio_user_spec.o 00:04:16.496 CXX test/cpp_headers/vhost.o 00:04:16.496 CXX test/cpp_headers/vmd.o 00:04:16.756 CXX test/cpp_headers/xor.o 00:04:16.756 CXX test/cpp_headers/zipf.o 00:04:17.016 LINK cuse 00:04:19.556 LINK esnap 00:04:19.556 00:04:19.556 real 1m23.052s 00:04:19.556 user 7m29.288s 00:04:19.556 sys 1m29.298s 00:04:19.556 15:31:18 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:19.556 15:31:18 make -- common/autotest_common.sh@10 -- $ set +x 00:04:19.556 ************************************ 00:04:19.556 END TEST make 00:04:19.556 ************************************ 00:04:19.556 15:31:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:19.556 15:31:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:19.556 15:31:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:19.556 15:31:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.556 15:31:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:19.556 15:31:18 -- pm/common@44 -- $ pid=5466 00:04:19.556 15:31:18 -- pm/common@50 -- $ kill -TERM 5466 00:04:19.556 15:31:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.556 15:31:18 -- 
pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:19.556 15:31:18 -- pm/common@44 -- $ pid=5468 00:04:19.556 15:31:18 -- pm/common@50 -- $ kill -TERM 5468 00:04:19.556 15:31:18 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:19.556 15:31:18 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:19.816 15:31:18 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.816 15:31:18 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.816 15:31:18 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.816 15:31:18 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.816 15:31:18 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.816 15:31:18 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.816 15:31:18 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.816 15:31:18 -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.816 15:31:18 -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.816 15:31:18 -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.816 15:31:18 -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.817 15:31:18 -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.817 15:31:18 -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.817 15:31:18 -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.817 15:31:18 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.817 15:31:18 -- scripts/common.sh@344 -- # case "$op" in 00:04:19.817 15:31:18 -- scripts/common.sh@345 -- # : 1 00:04:19.817 15:31:18 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.817 15:31:18 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.817 15:31:18 -- scripts/common.sh@365 -- # decimal 1 00:04:19.817 15:31:18 -- scripts/common.sh@353 -- # local d=1 00:04:19.817 15:31:18 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.817 15:31:18 -- scripts/common.sh@355 -- # echo 1 00:04:19.817 15:31:18 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.817 15:31:18 -- scripts/common.sh@366 -- # decimal 2 00:04:19.817 15:31:18 -- scripts/common.sh@353 -- # local d=2 00:04:19.817 15:31:18 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.817 15:31:18 -- scripts/common.sh@355 -- # echo 2 00:04:19.817 15:31:18 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.817 15:31:18 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.817 15:31:18 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.817 15:31:18 -- scripts/common.sh@368 -- # return 0 00:04:19.817 15:31:18 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.817 15:31:18 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.817 --rc genhtml_branch_coverage=1 00:04:19.817 --rc genhtml_function_coverage=1 00:04:19.817 --rc genhtml_legend=1 00:04:19.817 --rc geninfo_all_blocks=1 00:04:19.817 --rc geninfo_unexecuted_blocks=1 00:04:19.817 00:04:19.817 ' 00:04:19.817 15:31:18 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.817 --rc genhtml_branch_coverage=1 00:04:19.817 --rc genhtml_function_coverage=1 00:04:19.817 --rc genhtml_legend=1 00:04:19.817 --rc geninfo_all_blocks=1 00:04:19.817 --rc geninfo_unexecuted_blocks=1 00:04:19.817 00:04:19.817 ' 00:04:19.817 15:31:18 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:19.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.817 --rc genhtml_branch_coverage=1 00:04:19.817 --rc 
genhtml_function_coverage=1 00:04:19.817 --rc genhtml_legend=1 00:04:19.817 --rc geninfo_all_blocks=1 00:04:19.817 --rc geninfo_unexecuted_blocks=1 00:04:19.817 00:04:19.817 ' 00:04:19.817 15:31:18 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.817 --rc genhtml_branch_coverage=1 00:04:19.817 --rc genhtml_function_coverage=1 00:04:19.817 --rc genhtml_legend=1 00:04:19.817 --rc geninfo_all_blocks=1 00:04:19.817 --rc geninfo_unexecuted_blocks=1 00:04:19.817 00:04:19.817 ' 00:04:19.817 15:31:18 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:19.817 15:31:18 -- nvmf/common.sh@7 -- # uname -s 00:04:19.817 15:31:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:19.817 15:31:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:19.817 15:31:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:19.817 15:31:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:19.817 15:31:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:19.817 15:31:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.817 15:31:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:19.817 15:31:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.817 15:31:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.817 15:31:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.817 15:31:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29434f4a-7884-441f-8ea4-efd4338b5ac8 00:04:19.817 15:31:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=29434f4a-7884-441f-8ea4-efd4338b5ac8 00:04:19.817 15:31:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.817 15:31:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.817 15:31:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:19.817 15:31:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:19.817 15:31:18 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:19.817 15:31:18 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:19.817 15:31:18 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.817 15:31:18 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.817 15:31:18 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.817 15:31:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.817 15:31:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.817 15:31:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.817 15:31:18 -- paths/export.sh@5 -- # export PATH 00:04:19.817 15:31:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.817 15:31:18 -- nvmf/common.sh@51 -- # : 0 00:04:19.817 15:31:18 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:19.817 15:31:18 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:19.817 15:31:18 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:19.817 15:31:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.817 15:31:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.817 15:31:18 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:19.817 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:19.817 15:31:18 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:19.817 15:31:18 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:19.817 15:31:18 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:19.817 15:31:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:19.817 15:31:18 -- spdk/autotest.sh@32 -- # uname -s 00:04:19.817 15:31:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:19.817 15:31:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:19.817 15:31:18 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:19.817 15:31:18 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:19.817 15:31:18 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:19.817 15:31:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:20.077 15:31:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:20.077 15:31:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:20.077 15:31:18 -- spdk/autotest.sh@48 -- # udevadm_pid=54406 00:04:20.077 15:31:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:20.077 15:31:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:20.077 15:31:18 -- pm/common@17 -- # local monitor 00:04:20.077 15:31:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:20.077 15:31:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:20.077 15:31:18 -- pm/common@25 -- # sleep 1 00:04:20.077 15:31:18 -- pm/common@21 -- # date +%s 00:04:20.077 15:31:18 -- 
pm/common@21 -- # date +%s 00:04:20.077 15:31:18 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732548678 00:04:20.077 15:31:18 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732548678 00:04:20.077 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732548678_collect-cpu-load.pm.log 00:04:20.077 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732548678_collect-vmstat.pm.log 00:04:21.015 15:31:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:21.015 15:31:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:21.015 15:31:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.015 15:31:19 -- common/autotest_common.sh@10 -- # set +x 00:04:21.015 15:31:19 -- spdk/autotest.sh@59 -- # create_test_list 00:04:21.015 15:31:19 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:21.015 15:31:19 -- common/autotest_common.sh@10 -- # set +x 00:04:21.015 15:31:19 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:21.015 15:31:19 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:21.015 15:31:19 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:21.015 15:31:19 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:21.015 15:31:19 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:21.015 15:31:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:21.015 15:31:19 -- common/autotest_common.sh@1457 -- # uname 00:04:21.015 15:31:19 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:21.015 15:31:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:21.015 15:31:19 -- common/autotest_common.sh@1477 -- 
# uname 00:04:21.015 15:31:19 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:21.015 15:31:19 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:21.015 15:31:19 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:21.015 lcov: LCOV version 1.15 00:04:21.015 15:31:19 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:35.912 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:35.912 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:48.137 15:31:45 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:48.137 15:31:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.137 15:31:45 -- common/autotest_common.sh@10 -- # set +x 00:04:48.137 15:31:45 -- spdk/autotest.sh@78 -- # rm -f 00:04:48.137 15:31:45 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:48.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.137 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:48.137 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:48.137 15:31:46 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:48.137 15:31:46 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:48.137 15:31:46 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:48.137 15:31:46 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:48.137 
15:31:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:48.137 15:31:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:48.137 15:31:46 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:48.137 15:31:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:48.137 15:31:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:48.137 15:31:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:48.137 15:31:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:48.137 15:31:46 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:48.137 15:31:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:48.137 15:31:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:48.137 15:31:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:48.137 15:31:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:48.137 15:31:46 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:48.137 15:31:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:48.137 15:31:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:48.137 15:31:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:48.137 15:31:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:48.137 15:31:46 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:48.137 15:31:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:48.137 15:31:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:48.137 15:31:46 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:48.137 15:31:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:48.137 15:31:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:48.137 15:31:46 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:48.137 15:31:46 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:48.137 15:31:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:48.137 No valid GPT data, bailing 00:04:48.137 15:31:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:48.137 15:31:46 -- scripts/common.sh@394 -- # pt= 00:04:48.137 15:31:46 -- scripts/common.sh@395 -- # return 1 00:04:48.137 15:31:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:48.137 1+0 records in 00:04:48.137 1+0 records out 00:04:48.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00594033 s, 177 MB/s 00:04:48.137 15:31:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:48.137 15:31:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:48.137 15:31:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:48.137 15:31:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:48.137 15:31:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:48.137 No valid GPT data, bailing 00:04:48.137 15:31:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:48.137 15:31:46 -- scripts/common.sh@394 -- # pt= 00:04:48.137 15:31:46 -- scripts/common.sh@395 -- # return 1 00:04:48.137 15:31:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:48.137 1+0 records in 00:04:48.137 1+0 records out 00:04:48.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00389059 s, 270 MB/s 00:04:48.137 15:31:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:48.137 15:31:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:48.137 15:31:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:48.137 15:31:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:48.137 15:31:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:48.137 No valid GPT data, bailing 00:04:48.137 15:31:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:48.137 15:31:46 -- scripts/common.sh@394 -- # pt= 00:04:48.137 15:31:46 -- scripts/common.sh@395 -- # return 1 00:04:48.137 15:31:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:48.137 1+0 records in 00:04:48.137 1+0 records out 00:04:48.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618755 s, 169 MB/s 00:04:48.137 15:31:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:48.137 15:31:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:48.137 15:31:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:48.137 15:31:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:48.137 15:31:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:48.137 No valid GPT data, bailing 00:04:48.137 15:31:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:48.137 15:31:46 -- scripts/common.sh@394 -- # pt= 00:04:48.137 15:31:46 -- scripts/common.sh@395 -- # return 1 00:04:48.137 15:31:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:48.137 1+0 records in 00:04:48.137 1+0 records out 00:04:48.138 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00584897 s, 179 MB/s 00:04:48.138 15:31:46 -- spdk/autotest.sh@105 -- # sync 00:04:49.078 15:31:47 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:49.078 15:31:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:49.078 15:31:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:51.617 15:31:50 -- spdk/autotest.sh@111 -- # uname -s 00:04:51.617 15:31:50 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:51.617 15:31:50 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:51.617 15:31:50 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:04:52.556 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.556 Hugepages 00:04:52.556 node hugesize free / total 00:04:52.556 node0 1048576kB 0 / 0 00:04:52.556 node0 2048kB 0 / 0 00:04:52.556 00:04:52.556 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:52.556 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:52.817 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:52.817 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:52.817 15:31:51 -- spdk/autotest.sh@117 -- # uname -s 00:04:52.817 15:31:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:52.817 15:31:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:52.817 15:31:51 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.761 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.761 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.761 15:31:52 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:54.702 15:31:53 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:54.702 15:31:53 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:54.702 15:31:53 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:54.962 15:31:53 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:54.962 15:31:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:54.962 15:31:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:54.962 15:31:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.962 15:31:53 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:54.962 15:31:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:54.962 15:31:53 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:54.962 15:31:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:54.962 15:31:53 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.533 Waiting for block devices as requested 00:04:55.533 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:55.533 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:55.793 15:31:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:55.793 15:31:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:55.793 15:31:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:55.793 15:31:54 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:55.793 15:31:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:55.793 15:31:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:55.793 15:31:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:55.793 15:31:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:55.793 15:31:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:55.793 15:31:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:55.793 15:31:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:55.793 15:31:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:55.793 15:31:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:55.793 15:31:54 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:55.793 15:31:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:55.793 15:31:54 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:04:55.793 15:31:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:55.793 15:31:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:55.793 15:31:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:55.793 15:31:54 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:55.793 15:31:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:55.793 15:31:54 -- common/autotest_common.sh@1543 -- # continue 00:04:55.793 15:31:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:55.793 15:31:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:55.793 15:31:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:55.793 15:31:54 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:55.793 15:31:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:55.793 15:31:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:55.793 15:31:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:55.793 15:31:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:55.793 15:31:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:55.793 15:31:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:55.793 15:31:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:55.793 15:31:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:55.793 15:31:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:55.793 15:31:54 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:55.793 15:31:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:55.793 15:31:54 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:55.793 15:31:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:04:55.793 15:31:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:55.793 15:31:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:55.793 15:31:54 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:55.793 15:31:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:55.793 15:31:54 -- common/autotest_common.sh@1543 -- # continue 00:04:55.793 15:31:54 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:55.793 15:31:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:55.793 15:31:54 -- common/autotest_common.sh@10 -- # set +x 00:04:55.793 15:31:54 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:55.793 15:31:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.793 15:31:54 -- common/autotest_common.sh@10 -- # set +x 00:04:55.793 15:31:54 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:56.732 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.732 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:56.732 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:56.732 15:31:55 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:56.732 15:31:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:56.732 15:31:55 -- common/autotest_common.sh@10 -- # set +x 00:04:56.991 15:31:55 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:56.991 15:31:55 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:56.991 15:31:55 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:56.991 15:31:55 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:56.991 15:31:55 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:56.991 15:31:55 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:56.991 15:31:55 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:56.991 15:31:55 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:56.991 
15:31:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:56.991 15:31:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:56.991 15:31:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:56.991 15:31:55 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:56.991 15:31:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:56.991 15:31:55 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:56.991 15:31:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:56.991 15:31:55 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:56.991 15:31:55 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:56.991 15:31:55 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:56.991 15:31:55 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:56.991 15:31:55 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:56.991 15:31:55 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:56.991 15:31:55 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:56.991 15:31:55 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:56.991 15:31:55 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:56.991 15:31:55 -- common/autotest_common.sh@1572 -- # return 0 00:04:56.991 15:31:55 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:56.991 15:31:55 -- common/autotest_common.sh@1580 -- # return 0 00:04:56.991 15:31:55 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:56.991 15:31:55 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:56.991 15:31:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:56.991 15:31:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:56.991 15:31:55 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:56.991 15:31:55 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.992 15:31:55 -- common/autotest_common.sh@10 -- # set +x 00:04:56.992 15:31:55 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:56.992 15:31:55 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:56.992 15:31:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.992 15:31:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.992 15:31:55 -- common/autotest_common.sh@10 -- # set +x 00:04:56.992 ************************************ 00:04:56.992 START TEST env 00:04:56.992 ************************************ 00:04:56.992 15:31:55 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:56.992 * Looking for test storage... 00:04:56.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:56.992 15:31:55 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.992 15:31:55 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.992 15:31:55 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.252 15:31:55 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.252 15:31:55 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.252 15:31:55 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.252 15:31:55 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.252 15:31:55 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.252 15:31:55 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.252 15:31:55 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.252 15:31:55 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.252 15:31:55 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.252 15:31:55 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.252 15:31:55 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.252 15:31:55 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.252 15:31:55 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:57.252 15:31:55 env -- scripts/common.sh@345 -- # : 1 00:04:57.252 15:31:55 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.252 15:31:55 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.252 15:31:55 env -- scripts/common.sh@365 -- # decimal 1 00:04:57.252 15:31:55 env -- scripts/common.sh@353 -- # local d=1 00:04:57.252 15:31:55 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.252 15:31:55 env -- scripts/common.sh@355 -- # echo 1 00:04:57.252 15:31:55 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.252 15:31:55 env -- scripts/common.sh@366 -- # decimal 2 00:04:57.252 15:31:55 env -- scripts/common.sh@353 -- # local d=2 00:04:57.252 15:31:55 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.252 15:31:55 env -- scripts/common.sh@355 -- # echo 2 00:04:57.252 15:31:55 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.252 15:31:55 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.252 15:31:55 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.252 15:31:55 env -- scripts/common.sh@368 -- # return 0 00:04:57.252 15:31:55 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.252 15:31:55 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.252 --rc genhtml_branch_coverage=1 00:04:57.252 --rc genhtml_function_coverage=1 00:04:57.252 --rc genhtml_legend=1 00:04:57.252 --rc geninfo_all_blocks=1 00:04:57.252 --rc geninfo_unexecuted_blocks=1 00:04:57.252 00:04:57.252 ' 00:04:57.252 15:31:55 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.252 --rc genhtml_branch_coverage=1 00:04:57.252 --rc genhtml_function_coverage=1 00:04:57.252 --rc genhtml_legend=1 00:04:57.252 --rc 
geninfo_all_blocks=1 00:04:57.252 --rc geninfo_unexecuted_blocks=1 00:04:57.252 00:04:57.252 ' 00:04:57.252 15:31:55 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.252 --rc genhtml_branch_coverage=1 00:04:57.252 --rc genhtml_function_coverage=1 00:04:57.252 --rc genhtml_legend=1 00:04:57.252 --rc geninfo_all_blocks=1 00:04:57.252 --rc geninfo_unexecuted_blocks=1 00:04:57.252 00:04:57.252 ' 00:04:57.252 15:31:55 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.252 --rc genhtml_branch_coverage=1 00:04:57.252 --rc genhtml_function_coverage=1 00:04:57.252 --rc genhtml_legend=1 00:04:57.252 --rc geninfo_all_blocks=1 00:04:57.252 --rc geninfo_unexecuted_blocks=1 00:04:57.252 00:04:57.252 ' 00:04:57.252 15:31:55 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:57.252 15:31:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.252 15:31:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.252 15:31:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.252 ************************************ 00:04:57.252 START TEST env_memory 00:04:57.252 ************************************ 00:04:57.252 15:31:55 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:57.252 00:04:57.252 00:04:57.252 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.252 http://cunit.sourceforge.net/ 00:04:57.252 00:04:57.252 00:04:57.252 Suite: memory 00:04:57.252 Test: alloc and free memory map ...[2024-11-25 15:31:55.827630] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:57.252 passed 00:04:57.252 Test: mem map translation ...[2024-11-25 15:31:55.867971] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:57.252 [2024-11-25 15:31:55.868012] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:57.252 [2024-11-25 15:31:55.868085] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:57.252 [2024-11-25 15:31:55.868103] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:57.252 passed 00:04:57.252 Test: mem map registration ...[2024-11-25 15:31:55.929339] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:57.252 [2024-11-25 15:31:55.929375] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:57.513 passed 00:04:57.513 Test: mem map adjacent registrations ...passed 00:04:57.513 00:04:57.513 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.513 suites 1 1 n/a 0 0 00:04:57.513 tests 4 4 4 0 0 00:04:57.513 asserts 152 152 152 0 n/a 00:04:57.513 00:04:57.513 Elapsed time = 0.218 seconds 00:04:57.513 00:04:57.513 real 0m0.254s 00:04:57.513 user 0m0.228s 00:04:57.513 sys 0m0.021s 00:04:57.513 15:31:56 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.513 15:31:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:57.513 ************************************ 00:04:57.513 END TEST env_memory 00:04:57.513 ************************************ 00:04:57.513 15:31:56 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:57.513 
15:31:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.513 15:31:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.513 15:31:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.513 ************************************ 00:04:57.513 START TEST env_vtophys 00:04:57.513 ************************************ 00:04:57.513 15:31:56 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:57.513 EAL: lib.eal log level changed from notice to debug 00:04:57.513 EAL: Detected lcore 0 as core 0 on socket 0 00:04:57.513 EAL: Detected lcore 1 as core 0 on socket 0 00:04:57.513 EAL: Detected lcore 2 as core 0 on socket 0 00:04:57.513 EAL: Detected lcore 3 as core 0 on socket 0 00:04:57.513 EAL: Detected lcore 4 as core 0 on socket 0 00:04:57.513 EAL: Detected lcore 5 as core 0 on socket 0 00:04:57.513 EAL: Detected lcore 6 as core 0 on socket 0 00:04:57.513 EAL: Detected lcore 7 as core 0 on socket 0 00:04:57.513 EAL: Detected lcore 8 as core 0 on socket 0 00:04:57.513 EAL: Detected lcore 9 as core 0 on socket 0 00:04:57.513 EAL: Maximum logical cores by configuration: 128 00:04:57.513 EAL: Detected CPU lcores: 10 00:04:57.513 EAL: Detected NUMA nodes: 1 00:04:57.513 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:57.513 EAL: Detected shared linkage of DPDK 00:04:57.513 EAL: No shared files mode enabled, IPC will be disabled 00:04:57.513 EAL: Selected IOVA mode 'PA' 00:04:57.513 EAL: Probing VFIO support... 00:04:57.513 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:57.513 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:57.513 EAL: Ask a virtual area of 0x2e000 bytes 00:04:57.513 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:57.513 EAL: Setting up physically contiguous memory... 
00:04:57.513 EAL: Setting maximum number of open files to 524288 00:04:57.513 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:57.513 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:57.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.513 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:57.513 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.513 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:57.513 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:57.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.513 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:57.513 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.513 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:57.513 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:57.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.513 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:57.513 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.513 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:57.513 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:57.513 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.513 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:57.513 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.513 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.513 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:57.513 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:57.513 EAL: Hugepages will be freed exactly as allocated. 
00:04:57.513 EAL: No shared files mode enabled, IPC is disabled 00:04:57.513 EAL: No shared files mode enabled, IPC is disabled 00:04:57.773 EAL: TSC frequency is ~2290000 KHz 00:04:57.773 EAL: Main lcore 0 is ready (tid=7fdefcc55a40;cpuset=[0]) 00:04:57.773 EAL: Trying to obtain current memory policy. 00:04:57.773 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.773 EAL: Restoring previous memory policy: 0 00:04:57.773 EAL: request: mp_malloc_sync 00:04:57.773 EAL: No shared files mode enabled, IPC is disabled 00:04:57.773 EAL: Heap on socket 0 was expanded by 2MB 00:04:57.773 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:57.773 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:57.773 EAL: Mem event callback 'spdk:(nil)' registered 00:04:57.773 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:57.773 00:04:57.773 00:04:57.773 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.773 http://cunit.sourceforge.net/ 00:04:57.773 00:04:57.773 00:04:57.773 Suite: components_suite 00:04:58.032 Test: vtophys_malloc_test ...passed 00:04:58.032 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:58.032 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.032 EAL: Restoring previous memory policy: 4 00:04:58.032 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.032 EAL: request: mp_malloc_sync 00:04:58.032 EAL: No shared files mode enabled, IPC is disabled 00:04:58.032 EAL: Heap on socket 0 was expanded by 4MB 00:04:58.032 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.032 EAL: request: mp_malloc_sync 00:04:58.032 EAL: No shared files mode enabled, IPC is disabled 00:04:58.032 EAL: Heap on socket 0 was shrunk by 4MB 00:04:58.032 EAL: Trying to obtain current memory policy. 
00:04:58.032 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.032 EAL: Restoring previous memory policy: 4 00:04:58.032 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.032 EAL: request: mp_malloc_sync 00:04:58.032 EAL: No shared files mode enabled, IPC is disabled 00:04:58.032 EAL: Heap on socket 0 was expanded by 6MB 00:04:58.032 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.032 EAL: request: mp_malloc_sync 00:04:58.032 EAL: No shared files mode enabled, IPC is disabled 00:04:58.032 EAL: Heap on socket 0 was shrunk by 6MB 00:04:58.032 EAL: Trying to obtain current memory policy. 00:04:58.032 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.032 EAL: Restoring previous memory policy: 4 00:04:58.032 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.032 EAL: request: mp_malloc_sync 00:04:58.033 EAL: No shared files mode enabled, IPC is disabled 00:04:58.033 EAL: Heap on socket 0 was expanded by 10MB 00:04:58.033 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.033 EAL: request: mp_malloc_sync 00:04:58.033 EAL: No shared files mode enabled, IPC is disabled 00:04:58.033 EAL: Heap on socket 0 was shrunk by 10MB 00:04:58.033 EAL: Trying to obtain current memory policy. 00:04:58.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.033 EAL: Restoring previous memory policy: 4 00:04:58.033 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.033 EAL: request: mp_malloc_sync 00:04:58.033 EAL: No shared files mode enabled, IPC is disabled 00:04:58.033 EAL: Heap on socket 0 was expanded by 18MB 00:04:58.293 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.293 EAL: request: mp_malloc_sync 00:04:58.293 EAL: No shared files mode enabled, IPC is disabled 00:04:58.293 EAL: Heap on socket 0 was shrunk by 18MB 00:04:58.293 EAL: Trying to obtain current memory policy. 
00:04:58.293 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.293 EAL: Restoring previous memory policy: 4 00:04:58.293 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.293 EAL: request: mp_malloc_sync 00:04:58.293 EAL: No shared files mode enabled, IPC is disabled 00:04:58.293 EAL: Heap on socket 0 was expanded by 34MB 00:04:58.293 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.293 EAL: request: mp_malloc_sync 00:04:58.293 EAL: No shared files mode enabled, IPC is disabled 00:04:58.293 EAL: Heap on socket 0 was shrunk by 34MB 00:04:58.293 EAL: Trying to obtain current memory policy. 00:04:58.293 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.293 EAL: Restoring previous memory policy: 4 00:04:58.293 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.293 EAL: request: mp_malloc_sync 00:04:58.293 EAL: No shared files mode enabled, IPC is disabled 00:04:58.293 EAL: Heap on socket 0 was expanded by 66MB 00:04:58.293 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.553 EAL: request: mp_malloc_sync 00:04:58.553 EAL: No shared files mode enabled, IPC is disabled 00:04:58.553 EAL: Heap on socket 0 was shrunk by 66MB 00:04:58.553 EAL: Trying to obtain current memory policy. 00:04:58.553 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.553 EAL: Restoring previous memory policy: 4 00:04:58.553 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.553 EAL: request: mp_malloc_sync 00:04:58.553 EAL: No shared files mode enabled, IPC is disabled 00:04:58.553 EAL: Heap on socket 0 was expanded by 130MB 00:04:58.812 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.812 EAL: request: mp_malloc_sync 00:04:58.812 EAL: No shared files mode enabled, IPC is disabled 00:04:58.812 EAL: Heap on socket 0 was shrunk by 130MB 00:04:59.072 EAL: Trying to obtain current memory policy. 
00:04:59.072 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.072 EAL: Restoring previous memory policy: 4 00:04:59.072 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.072 EAL: request: mp_malloc_sync 00:04:59.072 EAL: No shared files mode enabled, IPC is disabled 00:04:59.072 EAL: Heap on socket 0 was expanded by 258MB 00:04:59.331 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.590 EAL: request: mp_malloc_sync 00:04:59.590 EAL: No shared files mode enabled, IPC is disabled 00:04:59.590 EAL: Heap on socket 0 was shrunk by 258MB 00:04:59.849 EAL: Trying to obtain current memory policy. 00:04:59.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.108 EAL: Restoring previous memory policy: 4 00:05:00.108 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.108 EAL: request: mp_malloc_sync 00:05:00.108 EAL: No shared files mode enabled, IPC is disabled 00:05:00.108 EAL: Heap on socket 0 was expanded by 514MB 00:05:01.065 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.065 EAL: request: mp_malloc_sync 00:05:01.065 EAL: No shared files mode enabled, IPC is disabled 00:05:01.065 EAL: Heap on socket 0 was shrunk by 514MB 00:05:01.635 EAL: Trying to obtain current memory policy. 
00:05:01.635 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.895 EAL: Restoring previous memory policy: 4 00:05:01.895 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.895 EAL: request: mp_malloc_sync 00:05:01.895 EAL: No shared files mode enabled, IPC is disabled 00:05:01.895 EAL: Heap on socket 0 was expanded by 1026MB 00:05:03.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.805 EAL: request: mp_malloc_sync 00:05:03.805 EAL: No shared files mode enabled, IPC is disabled 00:05:03.805 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:05.187 passed 00:05:05.187 00:05:05.187 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.187 suites 1 1 n/a 0 0 00:05:05.187 tests 2 2 2 0 0 00:05:05.187 asserts 5551 5551 5551 0 n/a 00:05:05.187 00:05:05.187 Elapsed time = 7.471 seconds 00:05:05.187 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.187 EAL: request: mp_malloc_sync 00:05:05.187 EAL: No shared files mode enabled, IPC is disabled 00:05:05.187 EAL: Heap on socket 0 was shrunk by 2MB 00:05:05.187 EAL: No shared files mode enabled, IPC is disabled 00:05:05.187 EAL: No shared files mode enabled, IPC is disabled 00:05:05.187 EAL: No shared files mode enabled, IPC is disabled 00:05:05.569 00:05:05.569 real 0m7.788s 00:05:05.569 user 0m6.890s 00:05:05.569 sys 0m0.749s 00:05:05.569 15:32:03 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.569 15:32:03 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:05.569 ************************************ 00:05:05.569 END TEST env_vtophys 00:05:05.569 ************************************ 00:05:05.569 15:32:03 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:05.569 15:32:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.569 15:32:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.569 15:32:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.569 
************************************ 00:05:05.569 START TEST env_pci 00:05:05.569 ************************************ 00:05:05.569 15:32:03 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:05.569 00:05:05.569 00:05:05.569 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.569 http://cunit.sourceforge.net/ 00:05:05.569 00:05:05.569 00:05:05.569 Suite: pci 00:05:05.569 Test: pci_hook ...[2024-11-25 15:32:03.995541] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56695 has claimed it 00:05:05.569 passed 00:05:05.569 00:05:05.569 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.569 suites 1 1 n/a 0 0 00:05:05.569 tests 1 1 1 0 0 00:05:05.569 asserts 25 25 25 0 n/a 00:05:05.569 00:05:05.569 Elapsed time = 0.005 seconds 00:05:05.569 EAL: Cannot find device (10000:00:01.0) 00:05:05.569 EAL: Failed to attach device on primary process 00:05:05.569 00:05:05.569 real 0m0.108s 00:05:05.569 user 0m0.056s 00:05:05.569 sys 0m0.052s 00:05:05.569 15:32:04 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.569 15:32:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:05.569 ************************************ 00:05:05.569 END TEST env_pci 00:05:05.569 ************************************ 00:05:05.569 15:32:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:05.569 15:32:04 env -- env/env.sh@15 -- # uname 00:05:05.569 15:32:04 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:05.569 15:32:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:05.569 15:32:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:05.569 15:32:04 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:05.569 15:32:04 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.569 15:32:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.569 ************************************ 00:05:05.569 START TEST env_dpdk_post_init 00:05:05.569 ************************************ 00:05:05.569 15:32:04 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:05.569 EAL: Detected CPU lcores: 10 00:05:05.569 EAL: Detected NUMA nodes: 1 00:05:05.569 EAL: Detected shared linkage of DPDK 00:05:05.569 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:05.569 EAL: Selected IOVA mode 'PA' 00:05:05.855 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:05.855 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:05.855 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:05.855 Starting DPDK initialization... 00:05:05.855 Starting SPDK post initialization... 00:05:05.855 SPDK NVMe probe 00:05:05.855 Attaching to 0000:00:10.0 00:05:05.855 Attaching to 0000:00:11.0 00:05:05.855 Attached to 0000:00:10.0 00:05:05.855 Attached to 0000:00:11.0 00:05:05.855 Cleaning up... 
00:05:05.855 00:05:05.855 real 0m0.278s 00:05:05.855 user 0m0.089s 00:05:05.855 sys 0m0.090s 00:05:05.855 15:32:04 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.855 ************************************ 00:05:05.855 15:32:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.855 END TEST env_dpdk_post_init 00:05:05.855 ************************************ 00:05:05.855 15:32:04 env -- env/env.sh@26 -- # uname 00:05:05.855 15:32:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:05.855 15:32:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:05.855 15:32:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.855 15:32:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.855 15:32:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.855 ************************************ 00:05:05.855 START TEST env_mem_callbacks 00:05:05.855 ************************************ 00:05:05.855 15:32:04 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:06.115 EAL: Detected CPU lcores: 10 00:05:06.115 EAL: Detected NUMA nodes: 1 00:05:06.115 EAL: Detected shared linkage of DPDK 00:05:06.115 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:06.115 EAL: Selected IOVA mode 'PA' 00:05:06.115 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:06.115 00:05:06.115 00:05:06.115 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.115 http://cunit.sourceforge.net/ 00:05:06.115 00:05:06.115 00:05:06.115 Suite: memory 00:05:06.115 Test: test ... 
00:05:06.115 register 0x200000200000 2097152 00:05:06.115 malloc 3145728 00:05:06.115 register 0x200000400000 4194304 00:05:06.115 buf 0x2000004fffc0 len 3145728 PASSED 00:05:06.115 malloc 64 00:05:06.115 buf 0x2000004ffec0 len 64 PASSED 00:05:06.115 malloc 4194304 00:05:06.115 register 0x200000800000 6291456 00:05:06.115 buf 0x2000009fffc0 len 4194304 PASSED 00:05:06.115 free 0x2000004fffc0 3145728 00:05:06.115 free 0x2000004ffec0 64 00:05:06.115 unregister 0x200000400000 4194304 PASSED 00:05:06.115 free 0x2000009fffc0 4194304 00:05:06.115 unregister 0x200000800000 6291456 PASSED 00:05:06.115 malloc 8388608 00:05:06.115 register 0x200000400000 10485760 00:05:06.115 buf 0x2000005fffc0 len 8388608 PASSED 00:05:06.115 free 0x2000005fffc0 8388608 00:05:06.115 unregister 0x200000400000 10485760 PASSED 00:05:06.115 passed 00:05:06.115 00:05:06.115 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.115 suites 1 1 n/a 0 0 00:05:06.115 tests 1 1 1 0 0 00:05:06.115 asserts 15 15 15 0 n/a 00:05:06.115 00:05:06.115 Elapsed time = 0.083 seconds 00:05:06.115 00:05:06.115 real 0m0.277s 00:05:06.115 user 0m0.113s 00:05:06.115 sys 0m0.062s 00:05:06.115 15:32:04 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.115 15:32:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:06.115 ************************************ 00:05:06.115 END TEST env_mem_callbacks 00:05:06.115 ************************************ 00:05:06.375 ************************************ 00:05:06.375 END TEST env 00:05:06.375 ************************************ 00:05:06.375 00:05:06.375 real 0m9.273s 00:05:06.375 user 0m7.585s 00:05:06.375 sys 0m1.341s 00:05:06.375 15:32:04 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.375 15:32:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.375 15:32:04 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:06.375 15:32:04 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.375 15:32:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.375 15:32:04 -- common/autotest_common.sh@10 -- # set +x 00:05:06.375 ************************************ 00:05:06.375 START TEST rpc 00:05:06.375 ************************************ 00:05:06.375 15:32:04 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:06.375 * Looking for test storage... 00:05:06.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:06.375 15:32:04 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.375 15:32:04 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.375 15:32:04 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.635 15:32:05 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.635 15:32:05 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.635 15:32:05 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.635 15:32:05 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.635 15:32:05 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.635 15:32:05 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.635 15:32:05 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.635 15:32:05 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.635 15:32:05 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.635 15:32:05 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.635 15:32:05 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.635 15:32:05 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.635 15:32:05 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:06.635 15:32:05 rpc -- scripts/common.sh@345 -- # : 1 00:05:06.635 15:32:05 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.635 15:32:05 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.635 15:32:05 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:06.635 15:32:05 rpc -- scripts/common.sh@353 -- # local d=1 00:05:06.635 15:32:05 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.635 15:32:05 rpc -- scripts/common.sh@355 -- # echo 1 00:05:06.635 15:32:05 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.635 15:32:05 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:06.635 15:32:05 rpc -- scripts/common.sh@353 -- # local d=2 00:05:06.635 15:32:05 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.635 15:32:05 rpc -- scripts/common.sh@355 -- # echo 2 00:05:06.635 15:32:05 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.635 15:32:05 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.635 15:32:05 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.635 15:32:05 rpc -- scripts/common.sh@368 -- # return 0 00:05:06.635 15:32:05 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.635 15:32:05 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.635 --rc genhtml_branch_coverage=1 00:05:06.635 --rc genhtml_function_coverage=1 00:05:06.635 --rc genhtml_legend=1 00:05:06.635 --rc geninfo_all_blocks=1 00:05:06.635 --rc geninfo_unexecuted_blocks=1 00:05:06.635 00:05:06.635 ' 00:05:06.635 15:32:05 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.635 --rc genhtml_branch_coverage=1 00:05:06.635 --rc genhtml_function_coverage=1 00:05:06.635 --rc genhtml_legend=1 00:05:06.635 --rc geninfo_all_blocks=1 00:05:06.635 --rc geninfo_unexecuted_blocks=1 00:05:06.635 00:05:06.635 ' 00:05:06.635 15:32:05 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:06.635 --rc genhtml_branch_coverage=1 00:05:06.635 --rc genhtml_function_coverage=1 00:05:06.635 --rc genhtml_legend=1 00:05:06.635 --rc geninfo_all_blocks=1 00:05:06.635 --rc geninfo_unexecuted_blocks=1 00:05:06.635 00:05:06.635 ' 00:05:06.635 15:32:05 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.635 --rc genhtml_branch_coverage=1 00:05:06.635 --rc genhtml_function_coverage=1 00:05:06.635 --rc genhtml_legend=1 00:05:06.635 --rc geninfo_all_blocks=1 00:05:06.635 --rc geninfo_unexecuted_blocks=1 00:05:06.635 00:05:06.635 ' 00:05:06.635 15:32:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56822 00:05:06.635 15:32:05 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:06.635 15:32:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.635 15:32:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56822 00:05:06.635 15:32:05 rpc -- common/autotest_common.sh@835 -- # '[' -z 56822 ']' 00:05:06.635 15:32:05 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.635 15:32:05 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.635 15:32:05 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.635 15:32:05 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.635 15:32:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.635 [2024-11-25 15:32:05.189528] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:05:06.635 [2024-11-25 15:32:05.189639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56822 ] 00:05:06.895 [2024-11-25 15:32:05.362909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.895 [2024-11-25 15:32:05.471391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:06.895 [2024-11-25 15:32:05.471453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56822' to capture a snapshot of events at runtime. 00:05:06.895 [2024-11-25 15:32:05.471462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:06.895 [2024-11-25 15:32:05.471486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:06.895 [2024-11-25 15:32:05.471494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56822 for offline analysis/debug. 
00:05:06.895 [2024-11-25 15:32:05.472608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.835 15:32:06 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.835 15:32:06 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.835 15:32:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:07.835 15:32:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:07.835 15:32:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:07.835 15:32:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:07.835 15:32:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.835 15:32:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.835 15:32:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.835 ************************************ 00:05:07.835 START TEST rpc_integrity 00:05:07.835 ************************************ 00:05:07.835 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:07.835 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:07.835 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.835 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.835 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.835 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:07.835 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:07.835 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:07.835 15:32:06 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:07.835 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.835 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.835 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.835 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:07.835 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:07.835 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.835 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.835 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.835 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:07.835 { 00:05:07.835 "name": "Malloc0", 00:05:07.835 "aliases": [ 00:05:07.835 "7c72d4bc-0cdc-438a-becf-408a919c04bb" 00:05:07.835 ], 00:05:07.835 "product_name": "Malloc disk", 00:05:07.835 "block_size": 512, 00:05:07.835 "num_blocks": 16384, 00:05:07.835 "uuid": "7c72d4bc-0cdc-438a-becf-408a919c04bb", 00:05:07.835 "assigned_rate_limits": { 00:05:07.835 "rw_ios_per_sec": 0, 00:05:07.835 "rw_mbytes_per_sec": 0, 00:05:07.835 "r_mbytes_per_sec": 0, 00:05:07.835 "w_mbytes_per_sec": 0 00:05:07.835 }, 00:05:07.835 "claimed": false, 00:05:07.835 "zoned": false, 00:05:07.835 "supported_io_types": { 00:05:07.835 "read": true, 00:05:07.835 "write": true, 00:05:07.835 "unmap": true, 00:05:07.835 "flush": true, 00:05:07.835 "reset": true, 00:05:07.835 "nvme_admin": false, 00:05:07.835 "nvme_io": false, 00:05:07.835 "nvme_io_md": false, 00:05:07.835 "write_zeroes": true, 00:05:07.835 "zcopy": true, 00:05:07.835 "get_zone_info": false, 00:05:07.835 "zone_management": false, 00:05:07.835 "zone_append": false, 00:05:07.835 "compare": false, 00:05:07.835 "compare_and_write": false, 00:05:07.835 "abort": true, 00:05:07.835 "seek_hole": false, 
00:05:07.835 "seek_data": false, 00:05:07.835 "copy": true, 00:05:07.835 "nvme_iov_md": false 00:05:07.835 }, 00:05:07.835 "memory_domains": [ 00:05:07.835 { 00:05:07.835 "dma_device_id": "system", 00:05:07.835 "dma_device_type": 1 00:05:07.835 }, 00:05:07.835 { 00:05:07.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.835 "dma_device_type": 2 00:05:07.835 } 00:05:07.835 ], 00:05:07.835 "driver_specific": {} 00:05:07.835 } 00:05:07.835 ]' 00:05:07.835 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:07.835 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:07.835 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:07.835 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.835 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.835 [2024-11-25 15:32:06.459783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:07.835 [2024-11-25 15:32:06.459844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:07.835 [2024-11-25 15:32:06.459863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:07.835 [2024-11-25 15:32:06.459877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:07.835 [2024-11-25 15:32:06.461961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:07.836 [2024-11-25 15:32:06.462002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:07.836 Passthru0 00:05:07.836 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.836 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:07.836 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.836 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:07.836 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.836 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:07.836 { 00:05:07.836 "name": "Malloc0", 00:05:07.836 "aliases": [ 00:05:07.836 "7c72d4bc-0cdc-438a-becf-408a919c04bb" 00:05:07.836 ], 00:05:07.836 "product_name": "Malloc disk", 00:05:07.836 "block_size": 512, 00:05:07.836 "num_blocks": 16384, 00:05:07.836 "uuid": "7c72d4bc-0cdc-438a-becf-408a919c04bb", 00:05:07.836 "assigned_rate_limits": { 00:05:07.836 "rw_ios_per_sec": 0, 00:05:07.836 "rw_mbytes_per_sec": 0, 00:05:07.836 "r_mbytes_per_sec": 0, 00:05:07.836 "w_mbytes_per_sec": 0 00:05:07.836 }, 00:05:07.836 "claimed": true, 00:05:07.836 "claim_type": "exclusive_write", 00:05:07.836 "zoned": false, 00:05:07.836 "supported_io_types": { 00:05:07.836 "read": true, 00:05:07.836 "write": true, 00:05:07.836 "unmap": true, 00:05:07.836 "flush": true, 00:05:07.836 "reset": true, 00:05:07.836 "nvme_admin": false, 00:05:07.836 "nvme_io": false, 00:05:07.836 "nvme_io_md": false, 00:05:07.836 "write_zeroes": true, 00:05:07.836 "zcopy": true, 00:05:07.836 "get_zone_info": false, 00:05:07.836 "zone_management": false, 00:05:07.836 "zone_append": false, 00:05:07.836 "compare": false, 00:05:07.836 "compare_and_write": false, 00:05:07.836 "abort": true, 00:05:07.836 "seek_hole": false, 00:05:07.836 "seek_data": false, 00:05:07.836 "copy": true, 00:05:07.836 "nvme_iov_md": false 00:05:07.836 }, 00:05:07.836 "memory_domains": [ 00:05:07.836 { 00:05:07.836 "dma_device_id": "system", 00:05:07.836 "dma_device_type": 1 00:05:07.836 }, 00:05:07.836 { 00:05:07.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.836 "dma_device_type": 2 00:05:07.836 } 00:05:07.836 ], 00:05:07.836 "driver_specific": {} 00:05:07.836 }, 00:05:07.836 { 00:05:07.836 "name": "Passthru0", 00:05:07.836 "aliases": [ 00:05:07.836 "07ebca8e-c126-5f67-b289-45d88b5548fd" 00:05:07.836 ], 00:05:07.836 "product_name": "passthru", 00:05:07.836 
"block_size": 512, 00:05:07.836 "num_blocks": 16384, 00:05:07.836 "uuid": "07ebca8e-c126-5f67-b289-45d88b5548fd", 00:05:07.836 "assigned_rate_limits": { 00:05:07.836 "rw_ios_per_sec": 0, 00:05:07.836 "rw_mbytes_per_sec": 0, 00:05:07.836 "r_mbytes_per_sec": 0, 00:05:07.836 "w_mbytes_per_sec": 0 00:05:07.836 }, 00:05:07.836 "claimed": false, 00:05:07.836 "zoned": false, 00:05:07.836 "supported_io_types": { 00:05:07.836 "read": true, 00:05:07.836 "write": true, 00:05:07.836 "unmap": true, 00:05:07.836 "flush": true, 00:05:07.836 "reset": true, 00:05:07.836 "nvme_admin": false, 00:05:07.836 "nvme_io": false, 00:05:07.836 "nvme_io_md": false, 00:05:07.836 "write_zeroes": true, 00:05:07.836 "zcopy": true, 00:05:07.836 "get_zone_info": false, 00:05:07.836 "zone_management": false, 00:05:07.836 "zone_append": false, 00:05:07.836 "compare": false, 00:05:07.836 "compare_and_write": false, 00:05:07.836 "abort": true, 00:05:07.836 "seek_hole": false, 00:05:07.836 "seek_data": false, 00:05:07.836 "copy": true, 00:05:07.836 "nvme_iov_md": false 00:05:07.836 }, 00:05:07.836 "memory_domains": [ 00:05:07.836 { 00:05:07.836 "dma_device_id": "system", 00:05:07.836 "dma_device_type": 1 00:05:07.836 }, 00:05:07.836 { 00:05:07.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.836 "dma_device_type": 2 00:05:07.836 } 00:05:07.836 ], 00:05:07.836 "driver_specific": { 00:05:07.836 "passthru": { 00:05:07.836 "name": "Passthru0", 00:05:07.836 "base_bdev_name": "Malloc0" 00:05:07.836 } 00:05:07.836 } 00:05:07.836 } 00:05:07.836 ]' 00:05:07.836 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:08.097 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:08.097 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:08.097 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.097 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.097 15:32:06 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.097 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:08.097 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.097 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.097 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.097 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:08.097 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.097 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.097 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.097 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:08.097 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:08.097 15:32:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:08.097 00:05:08.097 real 0m0.352s 00:05:08.097 user 0m0.191s 00:05:08.097 sys 0m0.061s 00:05:08.097 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.097 15:32:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.097 ************************************ 00:05:08.097 END TEST rpc_integrity 00:05:08.097 ************************************ 00:05:08.097 15:32:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:08.097 15:32:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.097 15:32:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.097 15:32:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.097 ************************************ 00:05:08.097 START TEST rpc_plugins 00:05:08.097 ************************************ 00:05:08.097 15:32:06 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:08.097 15:32:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:08.097 15:32:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.097 15:32:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.097 15:32:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.097 15:32:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:08.097 15:32:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:08.097 15:32:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.097 15:32:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.097 15:32:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.097 15:32:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:08.097 { 00:05:08.097 "name": "Malloc1", 00:05:08.097 "aliases": [ 00:05:08.097 "f12bebd0-5fe0-44a4-a1ab-f7d148e233f1" 00:05:08.097 ], 00:05:08.097 "product_name": "Malloc disk", 00:05:08.097 "block_size": 4096, 00:05:08.097 "num_blocks": 256, 00:05:08.097 "uuid": "f12bebd0-5fe0-44a4-a1ab-f7d148e233f1", 00:05:08.097 "assigned_rate_limits": { 00:05:08.097 "rw_ios_per_sec": 0, 00:05:08.097 "rw_mbytes_per_sec": 0, 00:05:08.097 "r_mbytes_per_sec": 0, 00:05:08.097 "w_mbytes_per_sec": 0 00:05:08.097 }, 00:05:08.097 "claimed": false, 00:05:08.097 "zoned": false, 00:05:08.097 "supported_io_types": { 00:05:08.097 "read": true, 00:05:08.097 "write": true, 00:05:08.097 "unmap": true, 00:05:08.097 "flush": true, 00:05:08.097 "reset": true, 00:05:08.097 "nvme_admin": false, 00:05:08.097 "nvme_io": false, 00:05:08.097 "nvme_io_md": false, 00:05:08.097 "write_zeroes": true, 00:05:08.097 "zcopy": true, 00:05:08.097 "get_zone_info": false, 00:05:08.097 "zone_management": false, 00:05:08.097 "zone_append": false, 00:05:08.097 "compare": false, 00:05:08.097 "compare_and_write": false, 00:05:08.097 "abort": true, 00:05:08.097 "seek_hole": false, 00:05:08.097 "seek_data": false, 00:05:08.097 "copy": 
true, 00:05:08.097 "nvme_iov_md": false 00:05:08.097 }, 00:05:08.097 "memory_domains": [ 00:05:08.097 { 00:05:08.097 "dma_device_id": "system", 00:05:08.097 "dma_device_type": 1 00:05:08.097 }, 00:05:08.097 { 00:05:08.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.097 "dma_device_type": 2 00:05:08.097 } 00:05:08.097 ], 00:05:08.097 "driver_specific": {} 00:05:08.097 } 00:05:08.097 ]' 00:05:08.097 15:32:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:08.358 15:32:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:08.358 15:32:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:08.358 15:32:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.358 15:32:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.358 15:32:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.358 15:32:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:08.358 15:32:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.358 15:32:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.358 15:32:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.358 15:32:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:08.358 15:32:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:08.358 15:32:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:08.358 00:05:08.358 real 0m0.177s 00:05:08.358 user 0m0.097s 00:05:08.358 sys 0m0.032s 00:05:08.358 15:32:06 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.358 15:32:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.358 ************************************ 00:05:08.358 END TEST rpc_plugins 00:05:08.358 ************************************ 00:05:08.358 15:32:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:08.358 15:32:06 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.358 15:32:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.358 15:32:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.358 ************************************ 00:05:08.358 START TEST rpc_trace_cmd_test 00:05:08.358 ************************************ 00:05:08.358 15:32:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:08.358 15:32:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:08.358 15:32:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:08.358 15:32:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.358 15:32:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:08.358 15:32:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.358 15:32:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:08.358 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56822", 00:05:08.358 "tpoint_group_mask": "0x8", 00:05:08.358 "iscsi_conn": { 00:05:08.358 "mask": "0x2", 00:05:08.358 "tpoint_mask": "0x0" 00:05:08.358 }, 00:05:08.358 "scsi": { 00:05:08.358 "mask": "0x4", 00:05:08.358 "tpoint_mask": "0x0" 00:05:08.358 }, 00:05:08.358 "bdev": { 00:05:08.358 "mask": "0x8", 00:05:08.358 "tpoint_mask": "0xffffffffffffffff" 00:05:08.358 }, 00:05:08.358 "nvmf_rdma": { 00:05:08.358 "mask": "0x10", 00:05:08.358 "tpoint_mask": "0x0" 00:05:08.358 }, 00:05:08.358 "nvmf_tcp": { 00:05:08.358 "mask": "0x20", 00:05:08.358 "tpoint_mask": "0x0" 00:05:08.358 }, 00:05:08.358 "ftl": { 00:05:08.358 "mask": "0x40", 00:05:08.358 "tpoint_mask": "0x0" 00:05:08.358 }, 00:05:08.358 "blobfs": { 00:05:08.358 "mask": "0x80", 00:05:08.358 "tpoint_mask": "0x0" 00:05:08.358 }, 00:05:08.358 "dsa": { 00:05:08.358 "mask": "0x200", 00:05:08.358 "tpoint_mask": "0x0" 00:05:08.358 }, 00:05:08.358 "thread": { 00:05:08.358 "mask": "0x400", 00:05:08.358 
"tpoint_mask": "0x0" 00:05:08.358 }, 00:05:08.358 "nvme_pcie": { 00:05:08.358 "mask": "0x800", 00:05:08.358 "tpoint_mask": "0x0" 00:05:08.358 }, 00:05:08.358 "iaa": { 00:05:08.358 "mask": "0x1000", 00:05:08.358 "tpoint_mask": "0x0" 00:05:08.358 }, 00:05:08.358 "nvme_tcp": { 00:05:08.358 "mask": "0x2000", 00:05:08.358 "tpoint_mask": "0x0" 00:05:08.358 }, 00:05:08.358 "bdev_nvme": { 00:05:08.358 "mask": "0x4000", 00:05:08.358 "tpoint_mask": "0x0" 00:05:08.358 }, 00:05:08.358 "sock": { 00:05:08.358 "mask": "0x8000", 00:05:08.358 "tpoint_mask": "0x0" 00:05:08.358 }, 00:05:08.358 "blob": { 00:05:08.358 "mask": "0x10000", 00:05:08.358 "tpoint_mask": "0x0" 00:05:08.358 }, 00:05:08.358 "bdev_raid": { 00:05:08.358 "mask": "0x20000", 00:05:08.358 "tpoint_mask": "0x0" 00:05:08.358 }, 00:05:08.358 "scheduler": { 00:05:08.358 "mask": "0x40000", 00:05:08.358 "tpoint_mask": "0x0" 00:05:08.358 } 00:05:08.358 }' 00:05:08.358 15:32:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:08.358 15:32:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:08.358 15:32:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:08.618 15:32:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:08.618 15:32:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:08.618 15:32:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:08.618 15:32:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:08.618 15:32:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:08.618 15:32:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:08.618 15:32:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:08.618 00:05:08.618 real 0m0.239s 00:05:08.618 user 0m0.194s 00:05:08.618 sys 0m0.037s 00:05:08.618 15:32:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:08.618 15:32:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:08.618 ************************************ 00:05:08.618 END TEST rpc_trace_cmd_test 00:05:08.618 ************************************ 00:05:08.618 15:32:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:08.618 15:32:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:08.618 15:32:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:08.618 15:32:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.618 15:32:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.618 15:32:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.618 ************************************ 00:05:08.618 START TEST rpc_daemon_integrity 00:05:08.618 ************************************ 00:05:08.618 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:08.618 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:08.618 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.618 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.618 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.618 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:08.618 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:08.879 { 00:05:08.879 "name": "Malloc2", 00:05:08.879 "aliases": [ 00:05:08.879 "5ba82ec5-6fbb-420c-874c-3beb67cca339" 00:05:08.879 ], 00:05:08.879 "product_name": "Malloc disk", 00:05:08.879 "block_size": 512, 00:05:08.879 "num_blocks": 16384, 00:05:08.879 "uuid": "5ba82ec5-6fbb-420c-874c-3beb67cca339", 00:05:08.879 "assigned_rate_limits": { 00:05:08.879 "rw_ios_per_sec": 0, 00:05:08.879 "rw_mbytes_per_sec": 0, 00:05:08.879 "r_mbytes_per_sec": 0, 00:05:08.879 "w_mbytes_per_sec": 0 00:05:08.879 }, 00:05:08.879 "claimed": false, 00:05:08.879 "zoned": false, 00:05:08.879 "supported_io_types": { 00:05:08.879 "read": true, 00:05:08.879 "write": true, 00:05:08.879 "unmap": true, 00:05:08.879 "flush": true, 00:05:08.879 "reset": true, 00:05:08.879 "nvme_admin": false, 00:05:08.879 "nvme_io": false, 00:05:08.879 "nvme_io_md": false, 00:05:08.879 "write_zeroes": true, 00:05:08.879 "zcopy": true, 00:05:08.879 "get_zone_info": false, 00:05:08.879 "zone_management": false, 00:05:08.879 "zone_append": false, 00:05:08.879 "compare": false, 00:05:08.879 "compare_and_write": false, 00:05:08.879 "abort": true, 00:05:08.879 "seek_hole": false, 00:05:08.879 "seek_data": false, 00:05:08.879 "copy": true, 00:05:08.879 "nvme_iov_md": false 00:05:08.879 }, 00:05:08.879 "memory_domains": [ 00:05:08.879 { 00:05:08.879 "dma_device_id": "system", 00:05:08.879 "dma_device_type": 1 00:05:08.879 }, 00:05:08.879 { 00:05:08.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.879 "dma_device_type": 2 00:05:08.879 } 
00:05:08.879 ], 00:05:08.879 "driver_specific": {} 00:05:08.879 } 00:05:08.879 ]' 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.879 [2024-11-25 15:32:07.397518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:08.879 [2024-11-25 15:32:07.397571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:08.879 [2024-11-25 15:32:07.397591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:08.879 [2024-11-25 15:32:07.397601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:08.879 [2024-11-25 15:32:07.399762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:08.879 [2024-11-25 15:32:07.399803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:08.879 Passthru0 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.879 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:08.879 { 00:05:08.879 "name": "Malloc2", 00:05:08.879 "aliases": [ 00:05:08.879 "5ba82ec5-6fbb-420c-874c-3beb67cca339" 
00:05:08.879 ], 00:05:08.879 "product_name": "Malloc disk", 00:05:08.879 "block_size": 512, 00:05:08.879 "num_blocks": 16384, 00:05:08.879 "uuid": "5ba82ec5-6fbb-420c-874c-3beb67cca339", 00:05:08.879 "assigned_rate_limits": { 00:05:08.879 "rw_ios_per_sec": 0, 00:05:08.879 "rw_mbytes_per_sec": 0, 00:05:08.879 "r_mbytes_per_sec": 0, 00:05:08.879 "w_mbytes_per_sec": 0 00:05:08.879 }, 00:05:08.879 "claimed": true, 00:05:08.879 "claim_type": "exclusive_write", 00:05:08.879 "zoned": false, 00:05:08.879 "supported_io_types": { 00:05:08.879 "read": true, 00:05:08.879 "write": true, 00:05:08.879 "unmap": true, 00:05:08.879 "flush": true, 00:05:08.879 "reset": true, 00:05:08.879 "nvme_admin": false, 00:05:08.879 "nvme_io": false, 00:05:08.879 "nvme_io_md": false, 00:05:08.879 "write_zeroes": true, 00:05:08.879 "zcopy": true, 00:05:08.879 "get_zone_info": false, 00:05:08.879 "zone_management": false, 00:05:08.879 "zone_append": false, 00:05:08.879 "compare": false, 00:05:08.879 "compare_and_write": false, 00:05:08.879 "abort": true, 00:05:08.879 "seek_hole": false, 00:05:08.879 "seek_data": false, 00:05:08.879 "copy": true, 00:05:08.879 "nvme_iov_md": false 00:05:08.879 }, 00:05:08.879 "memory_domains": [ 00:05:08.879 { 00:05:08.879 "dma_device_id": "system", 00:05:08.879 "dma_device_type": 1 00:05:08.879 }, 00:05:08.879 { 00:05:08.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.879 "dma_device_type": 2 00:05:08.879 } 00:05:08.879 ], 00:05:08.879 "driver_specific": {} 00:05:08.879 }, 00:05:08.879 { 00:05:08.879 "name": "Passthru0", 00:05:08.879 "aliases": [ 00:05:08.879 "ab4c15b2-b05c-5523-a883-066ea0687da9" 00:05:08.879 ], 00:05:08.879 "product_name": "passthru", 00:05:08.879 "block_size": 512, 00:05:08.879 "num_blocks": 16384, 00:05:08.879 "uuid": "ab4c15b2-b05c-5523-a883-066ea0687da9", 00:05:08.879 "assigned_rate_limits": { 00:05:08.879 "rw_ios_per_sec": 0, 00:05:08.879 "rw_mbytes_per_sec": 0, 00:05:08.879 "r_mbytes_per_sec": 0, 00:05:08.879 "w_mbytes_per_sec": 0 
00:05:08.879 }, 00:05:08.879 "claimed": false, 00:05:08.879 "zoned": false, 00:05:08.879 "supported_io_types": { 00:05:08.879 "read": true, 00:05:08.879 "write": true, 00:05:08.879 "unmap": true, 00:05:08.879 "flush": true, 00:05:08.879 "reset": true, 00:05:08.879 "nvme_admin": false, 00:05:08.879 "nvme_io": false, 00:05:08.879 "nvme_io_md": false, 00:05:08.879 "write_zeroes": true, 00:05:08.879 "zcopy": true, 00:05:08.879 "get_zone_info": false, 00:05:08.879 "zone_management": false, 00:05:08.879 "zone_append": false, 00:05:08.879 "compare": false, 00:05:08.879 "compare_and_write": false, 00:05:08.879 "abort": true, 00:05:08.879 "seek_hole": false, 00:05:08.879 "seek_data": false, 00:05:08.879 "copy": true, 00:05:08.879 "nvme_iov_md": false 00:05:08.879 }, 00:05:08.879 "memory_domains": [ 00:05:08.879 { 00:05:08.879 "dma_device_id": "system", 00:05:08.879 "dma_device_type": 1 00:05:08.879 }, 00:05:08.879 { 00:05:08.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.880 "dma_device_type": 2 00:05:08.880 } 00:05:08.880 ], 00:05:08.880 "driver_specific": { 00:05:08.880 "passthru": { 00:05:08.880 "name": "Passthru0", 00:05:08.880 "base_bdev_name": "Malloc2" 00:05:08.880 } 00:05:08.880 } 00:05:08.880 } 00:05:08.880 ]' 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:08.880 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:09.139 15:32:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:09.139 00:05:09.139 real 0m0.322s 00:05:09.139 user 0m0.182s 00:05:09.139 sys 0m0.046s 00:05:09.139 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.139 15:32:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.139 ************************************ 00:05:09.139 END TEST rpc_daemon_integrity 00:05:09.139 ************************************ 00:05:09.139 15:32:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:09.139 15:32:07 rpc -- rpc/rpc.sh@84 -- # killprocess 56822 00:05:09.139 15:32:07 rpc -- common/autotest_common.sh@954 -- # '[' -z 56822 ']' 00:05:09.139 15:32:07 rpc -- common/autotest_common.sh@958 -- # kill -0 56822 00:05:09.139 15:32:07 rpc -- common/autotest_common.sh@959 -- # uname 00:05:09.139 15:32:07 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.139 15:32:07 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56822 00:05:09.139 15:32:07 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.139 15:32:07 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.139 
killing process with pid 56822 00:05:09.139 15:32:07 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56822' 00:05:09.139 15:32:07 rpc -- common/autotest_common.sh@973 -- # kill 56822 00:05:09.139 15:32:07 rpc -- common/autotest_common.sh@978 -- # wait 56822 00:05:11.679 00:05:11.679 real 0m5.040s 00:05:11.679 user 0m5.614s 00:05:11.679 sys 0m0.848s 00:05:11.679 15:32:09 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.679 15:32:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.679 ************************************ 00:05:11.679 END TEST rpc 00:05:11.679 ************************************ 00:05:11.679 15:32:09 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:11.679 15:32:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.679 15:32:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.679 15:32:09 -- common/autotest_common.sh@10 -- # set +x 00:05:11.679 ************************************ 00:05:11.679 START TEST skip_rpc 00:05:11.680 ************************************ 00:05:11.680 15:32:09 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:11.680 * Looking for test storage... 
00:05:11.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:11.680 15:32:10 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.680 15:32:10 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.680 15:32:10 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:11.680 15:32:10 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.680 15:32:10 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:11.680 15:32:10 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.680 15:32:10 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:11.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.680 --rc genhtml_branch_coverage=1 00:05:11.680 --rc genhtml_function_coverage=1 00:05:11.680 --rc genhtml_legend=1 00:05:11.680 --rc geninfo_all_blocks=1 00:05:11.680 --rc geninfo_unexecuted_blocks=1 00:05:11.680 00:05:11.680 ' 00:05:11.680 15:32:10 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:11.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.680 --rc genhtml_branch_coverage=1 00:05:11.680 --rc genhtml_function_coverage=1 00:05:11.680 --rc genhtml_legend=1 00:05:11.680 --rc geninfo_all_blocks=1 00:05:11.680 --rc geninfo_unexecuted_blocks=1 00:05:11.680 00:05:11.680 ' 00:05:11.680 15:32:10 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:11.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.680 --rc genhtml_branch_coverage=1 00:05:11.680 --rc genhtml_function_coverage=1 00:05:11.680 --rc genhtml_legend=1 00:05:11.680 --rc geninfo_all_blocks=1 00:05:11.680 --rc geninfo_unexecuted_blocks=1 00:05:11.680 00:05:11.680 ' 00:05:11.680 15:32:10 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:11.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.680 --rc genhtml_branch_coverage=1 00:05:11.680 --rc genhtml_function_coverage=1 00:05:11.680 --rc genhtml_legend=1 00:05:11.680 --rc geninfo_all_blocks=1 00:05:11.680 --rc geninfo_unexecuted_blocks=1 00:05:11.680 00:05:11.680 ' 00:05:11.680 15:32:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:11.680 15:32:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:11.680 15:32:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:11.680 15:32:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.680 15:32:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.680 15:32:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.680 ************************************ 00:05:11.680 START TEST skip_rpc 00:05:11.680 ************************************ 00:05:11.680 15:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:11.680 15:32:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57051 00:05:11.680 15:32:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:11.680 15:32:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.680 15:32:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:11.680 [2024-11-25 15:32:10.305480] Starting SPDK v25.01-pre 
git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:05:11.680 [2024-11-25 15:32:10.305608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57051 ] 00:05:11.939 [2024-11-25 15:32:10.479080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.939 [2024-11-25 15:32:10.583628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57051 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57051 ']' 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57051 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57051 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.214 killing process with pid 57051 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57051' 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57051 00:05:17.214 15:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57051 00:05:19.122 00:05:19.122 real 0m7.298s 00:05:19.122 user 0m6.865s 00:05:19.122 sys 0m0.357s 00:05:19.122 15:32:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.122 15:32:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.122 ************************************ 00:05:19.122 END TEST skip_rpc 00:05:19.122 ************************************ 00:05:19.122 15:32:17 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:19.122 15:32:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.122 15:32:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.122 15:32:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.122 
************************************ 00:05:19.122 START TEST skip_rpc_with_json 00:05:19.122 ************************************ 00:05:19.122 15:32:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:19.122 15:32:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:19.122 15:32:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57156 00:05:19.122 15:32:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.122 15:32:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.122 15:32:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57156 00:05:19.122 15:32:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57156 ']' 00:05:19.122 15:32:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.122 15:32:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.122 15:32:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.122 15:32:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.122 15:32:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.122 [2024-11-25 15:32:17.673394] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:05:19.122 [2024-11-25 15:32:17.673530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57156 ] 00:05:19.382 [2024-11-25 15:32:17.848691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.382 [2024-11-25 15:32:17.952269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.322 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.322 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:20.322 15:32:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:20.322 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.322 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.322 [2024-11-25 15:32:18.760582] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:20.322 request: 00:05:20.322 { 00:05:20.322 "trtype": "tcp", 00:05:20.322 "method": "nvmf_get_transports", 00:05:20.322 "req_id": 1 00:05:20.322 } 00:05:20.322 Got JSON-RPC error response 00:05:20.322 response: 00:05:20.322 { 00:05:20.322 "code": -19, 00:05:20.322 "message": "No such device" 00:05:20.322 } 00:05:20.322 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:20.323 15:32:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:20.323 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.323 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.323 [2024-11-25 15:32:18.772678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:20.323 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.323 15:32:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:20.323 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.323 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.323 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.323 15:32:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:20.323 { 00:05:20.323 "subsystems": [ 00:05:20.323 { 00:05:20.323 "subsystem": "fsdev", 00:05:20.323 "config": [ 00:05:20.323 { 00:05:20.323 "method": "fsdev_set_opts", 00:05:20.323 "params": { 00:05:20.323 "fsdev_io_pool_size": 65535, 00:05:20.323 "fsdev_io_cache_size": 256 00:05:20.323 } 00:05:20.323 } 00:05:20.323 ] 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "subsystem": "keyring", 00:05:20.323 "config": [] 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "subsystem": "iobuf", 00:05:20.323 "config": [ 00:05:20.323 { 00:05:20.323 "method": "iobuf_set_options", 00:05:20.323 "params": { 00:05:20.323 "small_pool_count": 8192, 00:05:20.323 "large_pool_count": 1024, 00:05:20.323 "small_bufsize": 8192, 00:05:20.323 "large_bufsize": 135168, 00:05:20.323 "enable_numa": false 00:05:20.323 } 00:05:20.323 } 00:05:20.323 ] 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "subsystem": "sock", 00:05:20.323 "config": [ 00:05:20.323 { 00:05:20.323 "method": "sock_set_default_impl", 00:05:20.323 "params": { 00:05:20.323 "impl_name": "posix" 00:05:20.323 } 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "method": "sock_impl_set_options", 00:05:20.323 "params": { 00:05:20.323 "impl_name": "ssl", 00:05:20.323 "recv_buf_size": 4096, 00:05:20.323 "send_buf_size": 4096, 00:05:20.323 "enable_recv_pipe": true, 00:05:20.323 "enable_quickack": false, 00:05:20.323 
"enable_placement_id": 0, 00:05:20.323 "enable_zerocopy_send_server": true, 00:05:20.323 "enable_zerocopy_send_client": false, 00:05:20.323 "zerocopy_threshold": 0, 00:05:20.323 "tls_version": 0, 00:05:20.323 "enable_ktls": false 00:05:20.323 } 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "method": "sock_impl_set_options", 00:05:20.323 "params": { 00:05:20.323 "impl_name": "posix", 00:05:20.323 "recv_buf_size": 2097152, 00:05:20.323 "send_buf_size": 2097152, 00:05:20.323 "enable_recv_pipe": true, 00:05:20.323 "enable_quickack": false, 00:05:20.323 "enable_placement_id": 0, 00:05:20.323 "enable_zerocopy_send_server": true, 00:05:20.323 "enable_zerocopy_send_client": false, 00:05:20.323 "zerocopy_threshold": 0, 00:05:20.323 "tls_version": 0, 00:05:20.323 "enable_ktls": false 00:05:20.323 } 00:05:20.323 } 00:05:20.323 ] 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "subsystem": "vmd", 00:05:20.323 "config": [] 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "subsystem": "accel", 00:05:20.323 "config": [ 00:05:20.323 { 00:05:20.323 "method": "accel_set_options", 00:05:20.323 "params": { 00:05:20.323 "small_cache_size": 128, 00:05:20.323 "large_cache_size": 16, 00:05:20.323 "task_count": 2048, 00:05:20.323 "sequence_count": 2048, 00:05:20.323 "buf_count": 2048 00:05:20.323 } 00:05:20.323 } 00:05:20.323 ] 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "subsystem": "bdev", 00:05:20.323 "config": [ 00:05:20.323 { 00:05:20.323 "method": "bdev_set_options", 00:05:20.323 "params": { 00:05:20.323 "bdev_io_pool_size": 65535, 00:05:20.323 "bdev_io_cache_size": 256, 00:05:20.323 "bdev_auto_examine": true, 00:05:20.323 "iobuf_small_cache_size": 128, 00:05:20.323 "iobuf_large_cache_size": 16 00:05:20.323 } 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "method": "bdev_raid_set_options", 00:05:20.323 "params": { 00:05:20.323 "process_window_size_kb": 1024, 00:05:20.323 "process_max_bandwidth_mb_sec": 0 00:05:20.323 } 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "method": "bdev_iscsi_set_options", 
00:05:20.323 "params": { 00:05:20.323 "timeout_sec": 30 00:05:20.323 } 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "method": "bdev_nvme_set_options", 00:05:20.323 "params": { 00:05:20.323 "action_on_timeout": "none", 00:05:20.323 "timeout_us": 0, 00:05:20.323 "timeout_admin_us": 0, 00:05:20.323 "keep_alive_timeout_ms": 10000, 00:05:20.323 "arbitration_burst": 0, 00:05:20.323 "low_priority_weight": 0, 00:05:20.323 "medium_priority_weight": 0, 00:05:20.323 "high_priority_weight": 0, 00:05:20.323 "nvme_adminq_poll_period_us": 10000, 00:05:20.323 "nvme_ioq_poll_period_us": 0, 00:05:20.323 "io_queue_requests": 0, 00:05:20.323 "delay_cmd_submit": true, 00:05:20.323 "transport_retry_count": 4, 00:05:20.323 "bdev_retry_count": 3, 00:05:20.323 "transport_ack_timeout": 0, 00:05:20.323 "ctrlr_loss_timeout_sec": 0, 00:05:20.323 "reconnect_delay_sec": 0, 00:05:20.323 "fast_io_fail_timeout_sec": 0, 00:05:20.323 "disable_auto_failback": false, 00:05:20.323 "generate_uuids": false, 00:05:20.323 "transport_tos": 0, 00:05:20.323 "nvme_error_stat": false, 00:05:20.323 "rdma_srq_size": 0, 00:05:20.323 "io_path_stat": false, 00:05:20.323 "allow_accel_sequence": false, 00:05:20.323 "rdma_max_cq_size": 0, 00:05:20.323 "rdma_cm_event_timeout_ms": 0, 00:05:20.323 "dhchap_digests": [ 00:05:20.323 "sha256", 00:05:20.323 "sha384", 00:05:20.323 "sha512" 00:05:20.323 ], 00:05:20.323 "dhchap_dhgroups": [ 00:05:20.323 "null", 00:05:20.323 "ffdhe2048", 00:05:20.323 "ffdhe3072", 00:05:20.323 "ffdhe4096", 00:05:20.323 "ffdhe6144", 00:05:20.323 "ffdhe8192" 00:05:20.323 ] 00:05:20.323 } 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "method": "bdev_nvme_set_hotplug", 00:05:20.323 "params": { 00:05:20.323 "period_us": 100000, 00:05:20.323 "enable": false 00:05:20.323 } 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "method": "bdev_wait_for_examine" 00:05:20.323 } 00:05:20.323 ] 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "subsystem": "scsi", 00:05:20.323 "config": null 00:05:20.323 }, 00:05:20.323 { 
00:05:20.323 "subsystem": "scheduler", 00:05:20.323 "config": [ 00:05:20.323 { 00:05:20.323 "method": "framework_set_scheduler", 00:05:20.323 "params": { 00:05:20.323 "name": "static" 00:05:20.323 } 00:05:20.323 } 00:05:20.323 ] 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "subsystem": "vhost_scsi", 00:05:20.323 "config": [] 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "subsystem": "vhost_blk", 00:05:20.323 "config": [] 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "subsystem": "ublk", 00:05:20.323 "config": [] 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "subsystem": "nbd", 00:05:20.323 "config": [] 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "subsystem": "nvmf", 00:05:20.323 "config": [ 00:05:20.323 { 00:05:20.323 "method": "nvmf_set_config", 00:05:20.323 "params": { 00:05:20.323 "discovery_filter": "match_any", 00:05:20.323 "admin_cmd_passthru": { 00:05:20.323 "identify_ctrlr": false 00:05:20.323 }, 00:05:20.323 "dhchap_digests": [ 00:05:20.323 "sha256", 00:05:20.323 "sha384", 00:05:20.323 "sha512" 00:05:20.323 ], 00:05:20.323 "dhchap_dhgroups": [ 00:05:20.323 "null", 00:05:20.323 "ffdhe2048", 00:05:20.323 "ffdhe3072", 00:05:20.323 "ffdhe4096", 00:05:20.323 "ffdhe6144", 00:05:20.323 "ffdhe8192" 00:05:20.323 ] 00:05:20.323 } 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "method": "nvmf_set_max_subsystems", 00:05:20.323 "params": { 00:05:20.323 "max_subsystems": 1024 00:05:20.323 } 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "method": "nvmf_set_crdt", 00:05:20.323 "params": { 00:05:20.323 "crdt1": 0, 00:05:20.323 "crdt2": 0, 00:05:20.323 "crdt3": 0 00:05:20.323 } 00:05:20.323 }, 00:05:20.323 { 00:05:20.323 "method": "nvmf_create_transport", 00:05:20.323 "params": { 00:05:20.323 "trtype": "TCP", 00:05:20.323 "max_queue_depth": 128, 00:05:20.323 "max_io_qpairs_per_ctrlr": 127, 00:05:20.323 "in_capsule_data_size": 4096, 00:05:20.323 "max_io_size": 131072, 00:05:20.323 "io_unit_size": 131072, 00:05:20.323 "max_aq_depth": 128, 00:05:20.323 "num_shared_buffers": 511, 
00:05:20.323 "buf_cache_size": 4294967295, 00:05:20.323 "dif_insert_or_strip": false, 00:05:20.323 "zcopy": false, 00:05:20.323 "c2h_success": true, 00:05:20.323 "sock_priority": 0, 00:05:20.323 "abort_timeout_sec": 1, 00:05:20.323 "ack_timeout": 0, 00:05:20.323 "data_wr_pool_size": 0 00:05:20.323 } 00:05:20.324 } 00:05:20.324 ] 00:05:20.324 }, 00:05:20.324 { 00:05:20.324 "subsystem": "iscsi", 00:05:20.324 "config": [ 00:05:20.324 { 00:05:20.324 "method": "iscsi_set_options", 00:05:20.324 "params": { 00:05:20.324 "node_base": "iqn.2016-06.io.spdk", 00:05:20.324 "max_sessions": 128, 00:05:20.324 "max_connections_per_session": 2, 00:05:20.324 "max_queue_depth": 64, 00:05:20.324 "default_time2wait": 2, 00:05:20.324 "default_time2retain": 20, 00:05:20.324 "first_burst_length": 8192, 00:05:20.324 "immediate_data": true, 00:05:20.324 "allow_duplicated_isid": false, 00:05:20.324 "error_recovery_level": 0, 00:05:20.324 "nop_timeout": 60, 00:05:20.324 "nop_in_interval": 30, 00:05:20.324 "disable_chap": false, 00:05:20.324 "require_chap": false, 00:05:20.324 "mutual_chap": false, 00:05:20.324 "chap_group": 0, 00:05:20.324 "max_large_datain_per_connection": 64, 00:05:20.324 "max_r2t_per_connection": 4, 00:05:20.324 "pdu_pool_size": 36864, 00:05:20.324 "immediate_data_pool_size": 16384, 00:05:20.324 "data_out_pool_size": 2048 00:05:20.324 } 00:05:20.324 } 00:05:20.324 ] 00:05:20.324 } 00:05:20.324 ] 00:05:20.324 } 00:05:20.324 15:32:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:20.324 15:32:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57156 00:05:20.324 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57156 ']' 00:05:20.324 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57156 00:05:20.324 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:20.324 15:32:18 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.324 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57156 00:05:20.324 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.324 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.324 killing process with pid 57156 00:05:20.324 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57156' 00:05:20.324 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57156 00:05:20.324 15:32:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57156 00:05:22.856 15:32:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57201 00:05:22.856 15:32:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:22.856 15:32:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:28.136 15:32:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57201 00:05:28.136 15:32:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57201 ']' 00:05:28.136 15:32:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57201 00:05:28.136 15:32:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:28.136 15:32:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.136 15:32:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57201 00:05:28.136 15:32:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.136 15:32:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:05:28.136 killing process with pid 57201 00:05:28.136 15:32:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57201' 00:05:28.136 15:32:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57201 00:05:28.136 15:32:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57201 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:30.075 00:05:30.075 real 0m10.956s 00:05:30.075 user 0m10.420s 00:05:30.075 sys 0m0.832s 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.075 ************************************ 00:05:30.075 END TEST skip_rpc_with_json 00:05:30.075 ************************************ 00:05:30.075 15:32:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:30.075 15:32:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.075 15:32:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.075 15:32:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.075 ************************************ 00:05:30.075 START TEST skip_rpc_with_delay 00:05:30.075 ************************************ 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:30.075 15:32:28 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:30.075 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.075 [2024-11-25 15:32:28.701826] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:30.336 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:30.336 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:30.336 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:30.336 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:30.336 00:05:30.336 real 0m0.166s 00:05:30.336 user 0m0.097s 00:05:30.336 sys 0m0.067s 00:05:30.336 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.336 15:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:30.336 ************************************ 00:05:30.336 END TEST skip_rpc_with_delay 00:05:30.336 ************************************ 00:05:30.336 15:32:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:30.336 15:32:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:30.336 15:32:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:30.336 15:32:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.336 15:32:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.336 15:32:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.336 ************************************ 00:05:30.336 START TEST exit_on_failed_rpc_init 00:05:30.336 ************************************ 00:05:30.336 15:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:30.336 15:32:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57340 00:05:30.336 15:32:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.336 15:32:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57340 00:05:30.336 15:32:28 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57340 ']' 00:05:30.336 15:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.336 15:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.336 15:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.336 15:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.336 15:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.336 [2024-11-25 15:32:28.932573] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:05:30.336 [2024-11-25 15:32:28.932689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57340 ] 00:05:30.610 [2024-11-25 15:32:29.094480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.611 [2024-11-25 15:32:29.199647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.557 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.557 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:31.557 15:32:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.557 15:32:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.558 15:32:30 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:31.558 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.558 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.558 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.558 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.558 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.558 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.558 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.558 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.558 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:31.558 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.558 [2024-11-25 15:32:30.122101] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:05:31.558 [2024-11-25 15:32:30.122244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57358 ] 00:05:31.817 [2024-11-25 15:32:30.291532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.817 [2024-11-25 15:32:30.399593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.817 [2024-11-25 15:32:30.399706] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:31.817 [2024-11-25 15:32:30.399719] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:31.817 [2024-11-25 15:32:30.399732] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57340 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57340 ']' 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57340 00:05:32.078 15:32:30 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57340 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.078 killing process with pid 57340 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57340' 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57340 00:05:32.078 15:32:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57340 00:05:34.618 00:05:34.618 real 0m4.107s 00:05:34.618 user 0m4.410s 00:05:34.618 sys 0m0.551s 00:05:34.618 15:32:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.618 15:32:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:34.618 ************************************ 00:05:34.618 END TEST exit_on_failed_rpc_init 00:05:34.618 ************************************ 00:05:34.618 15:32:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:34.618 00:05:34.618 real 0m23.027s 00:05:34.618 user 0m22.000s 00:05:34.618 sys 0m2.107s 00:05:34.618 15:32:33 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.618 15:32:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.618 ************************************ 00:05:34.618 END TEST skip_rpc 00:05:34.618 ************************************ 00:05:34.618 15:32:33 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:34.618 15:32:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.618 15:32:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.618 15:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:34.618 ************************************ 00:05:34.618 START TEST rpc_client 00:05:34.618 ************************************ 00:05:34.618 15:32:33 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:34.618 * Looking for test storage... 00:05:34.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:34.618 15:32:33 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.618 15:32:33 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.618 15:32:33 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.618 15:32:33 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.618 15:32:33 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:34.618 15:32:33 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.618 15:32:33 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.618 --rc genhtml_branch_coverage=1 00:05:34.618 --rc genhtml_function_coverage=1 00:05:34.618 --rc genhtml_legend=1 00:05:34.618 --rc geninfo_all_blocks=1 00:05:34.618 --rc geninfo_unexecuted_blocks=1 00:05:34.618 00:05:34.618 ' 00:05:34.618 15:32:33 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.618 --rc genhtml_branch_coverage=1 00:05:34.618 --rc genhtml_function_coverage=1 00:05:34.618 --rc 
genhtml_legend=1 00:05:34.618 --rc geninfo_all_blocks=1 00:05:34.618 --rc geninfo_unexecuted_blocks=1 00:05:34.618 00:05:34.619 ' 00:05:34.619 15:32:33 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.619 --rc genhtml_branch_coverage=1 00:05:34.619 --rc genhtml_function_coverage=1 00:05:34.619 --rc genhtml_legend=1 00:05:34.619 --rc geninfo_all_blocks=1 00:05:34.619 --rc geninfo_unexecuted_blocks=1 00:05:34.619 00:05:34.619 ' 00:05:34.619 15:32:33 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.619 --rc genhtml_branch_coverage=1 00:05:34.619 --rc genhtml_function_coverage=1 00:05:34.619 --rc genhtml_legend=1 00:05:34.619 --rc geninfo_all_blocks=1 00:05:34.619 --rc geninfo_unexecuted_blocks=1 00:05:34.619 00:05:34.619 ' 00:05:34.619 15:32:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:34.879 OK 00:05:34.879 15:32:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:34.879 00:05:34.879 real 0m0.286s 00:05:34.879 user 0m0.144s 00:05:34.879 sys 0m0.157s 00:05:34.879 15:32:33 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.879 15:32:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:34.879 ************************************ 00:05:34.879 END TEST rpc_client 00:05:34.879 ************************************ 00:05:34.879 15:32:33 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:34.879 15:32:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.879 15:32:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.879 15:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:34.879 ************************************ 00:05:34.879 START TEST json_config 
00:05:34.879 ************************************ 00:05:34.879 15:32:33 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:34.879 15:32:33 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.879 15:32:33 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.879 15:32:33 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.140 15:32:33 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.140 15:32:33 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.140 15:32:33 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.140 15:32:33 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.140 15:32:33 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.140 15:32:33 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.140 15:32:33 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.140 15:32:33 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.140 15:32:33 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.140 15:32:33 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.140 15:32:33 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.140 15:32:33 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.140 15:32:33 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:35.140 15:32:33 json_config -- scripts/common.sh@345 -- # : 1 00:05:35.140 15:32:33 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.140 15:32:33 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.140 15:32:33 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:35.140 15:32:33 json_config -- scripts/common.sh@353 -- # local d=1 00:05:35.140 15:32:33 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.140 15:32:33 json_config -- scripts/common.sh@355 -- # echo 1 00:05:35.140 15:32:33 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.140 15:32:33 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:35.140 15:32:33 json_config -- scripts/common.sh@353 -- # local d=2 00:05:35.140 15:32:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.140 15:32:33 json_config -- scripts/common.sh@355 -- # echo 2 00:05:35.140 15:32:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.140 15:32:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.140 15:32:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.140 15:32:33 json_config -- scripts/common.sh@368 -- # return 0 00:05:35.140 15:32:33 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.140 15:32:33 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.140 --rc genhtml_branch_coverage=1 00:05:35.140 --rc genhtml_function_coverage=1 00:05:35.140 --rc genhtml_legend=1 00:05:35.140 --rc geninfo_all_blocks=1 00:05:35.140 --rc geninfo_unexecuted_blocks=1 00:05:35.140 00:05:35.140 ' 00:05:35.141 15:32:33 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.141 --rc genhtml_branch_coverage=1 00:05:35.141 --rc genhtml_function_coverage=1 00:05:35.141 --rc genhtml_legend=1 00:05:35.141 --rc geninfo_all_blocks=1 00:05:35.141 --rc geninfo_unexecuted_blocks=1 00:05:35.141 00:05:35.141 ' 00:05:35.141 15:32:33 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.141 --rc genhtml_branch_coverage=1 00:05:35.141 --rc genhtml_function_coverage=1 00:05:35.141 --rc genhtml_legend=1 00:05:35.141 --rc geninfo_all_blocks=1 00:05:35.141 --rc geninfo_unexecuted_blocks=1 00:05:35.141 00:05:35.141 ' 00:05:35.141 15:32:33 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.141 --rc genhtml_branch_coverage=1 00:05:35.141 --rc genhtml_function_coverage=1 00:05:35.141 --rc genhtml_legend=1 00:05:35.141 --rc geninfo_all_blocks=1 00:05:35.141 --rc geninfo_unexecuted_blocks=1 00:05:35.141 00:05:35.141 ' 00:05:35.141 15:32:33 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29434f4a-7884-441f-8ea4-efd4338b5ac8 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29434f4a-7884-441f-8ea4-efd4338b5ac8 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.141 15:32:33 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.141 15:32:33 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.141 15:32:33 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.141 15:32:33 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.141 15:32:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.141 15:32:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.141 15:32:33 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.141 15:32:33 json_config -- paths/export.sh@5 -- # export PATH 00:05:35.141 15:32:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@51 -- # : 0 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.141 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.141 15:32:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.141 15:32:33 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:35.141 15:32:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:35.141 15:32:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:35.141 15:32:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:35.141 15:32:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:35.141 WARNING: No tests are enabled so not running JSON configuration tests 00:05:35.141 15:32:33 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:35.141 15:32:33 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:35.141 00:05:35.141 real 0m0.223s 00:05:35.141 user 0m0.138s 00:05:35.141 sys 0m0.093s 00:05:35.141 15:32:33 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.141 15:32:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.141 ************************************ 00:05:35.141 END TEST json_config 00:05:35.141 ************************************ 00:05:35.141 15:32:33 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:35.141 15:32:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.141 15:32:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.141 15:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:35.141 ************************************ 00:05:35.141 START TEST json_config_extra_key 00:05:35.141 ************************************ 00:05:35.141 15:32:33 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:35.141 15:32:33 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.141 15:32:33 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:05:35.141 15:32:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.402 15:32:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:35.402 15:32:33 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.402 15:32:33 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.402 --rc genhtml_branch_coverage=1 00:05:35.402 --rc genhtml_function_coverage=1 00:05:35.402 --rc genhtml_legend=1 00:05:35.402 --rc geninfo_all_blocks=1 00:05:35.402 --rc geninfo_unexecuted_blocks=1 00:05:35.402 00:05:35.402 ' 00:05:35.402 15:32:33 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.402 --rc genhtml_branch_coverage=1 00:05:35.402 --rc genhtml_function_coverage=1 00:05:35.402 --rc 
genhtml_legend=1 00:05:35.402 --rc geninfo_all_blocks=1 00:05:35.402 --rc geninfo_unexecuted_blocks=1 00:05:35.402 00:05:35.402 ' 00:05:35.402 15:32:33 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.402 --rc genhtml_branch_coverage=1 00:05:35.402 --rc genhtml_function_coverage=1 00:05:35.402 --rc genhtml_legend=1 00:05:35.402 --rc geninfo_all_blocks=1 00:05:35.402 --rc geninfo_unexecuted_blocks=1 00:05:35.402 00:05:35.402 ' 00:05:35.402 15:32:33 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.402 --rc genhtml_branch_coverage=1 00:05:35.402 --rc genhtml_function_coverage=1 00:05:35.402 --rc genhtml_legend=1 00:05:35.402 --rc geninfo_all_blocks=1 00:05:35.402 --rc geninfo_unexecuted_blocks=1 00:05:35.402 00:05:35.402 ' 00:05:35.402 15:32:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29434f4a-7884-441f-8ea4-efd4338b5ac8 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29434f4a-7884-441f-8ea4-efd4338b5ac8 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.402 15:32:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.402 15:32:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.402 15:32:33 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.402 15:32:33 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.402 15:32:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:35.402 15:32:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.402 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.402 15:32:33 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.402 15:32:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:35.402 15:32:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:35.402 15:32:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:35.402 15:32:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:35.402 15:32:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:35.402 15:32:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:35.402 15:32:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:35.402 15:32:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:35.402 15:32:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:35.402 15:32:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.402 INFO: launching applications... 00:05:35.402 15:32:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
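The trace above shows json_config/common.sh tracking each test application in parallel associative arrays (app_pid, app_socket, app_params, configs_path) keyed by app name. A minimal standalone sketch of that bookkeeping pattern, assuming a background `sleep` as a stand-in for spdk_tgt (the `start_app` helper here is hypothetical, not the harness's real launcher):

```shell
#!/usr/bin/env bash
# Per-app state lives in associative arrays keyed by app name ('target'),
# mirroring the declare -A lines in the trace above.
declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')

start_app() {
    local app=$1
    # The real harness would exec spdk_tgt with ${app_params[$app]} and
    # -r ${app_socket[$app]}; a background sleep stands in for it here.
    sleep 60 &
    app_pid["$app"]=$!
}

start_app target
echo "target pid=${app_pid[target]} socket=${app_socket[target]}"
kill "${app_pid[target]}"
```

Keying every array by the same app name is what lets the later shutdown code look up the PID, socket, and params of any app from just the string "target".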
00:05:35.403 15:32:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.403 15:32:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:35.403 15:32:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:35.403 15:32:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.403 15:32:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.403 15:32:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.403 15:32:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.403 15:32:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.403 15:32:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57568 00:05:35.403 Waiting for target to run... 00:05:35.403 15:32:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.403 15:32:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57568 /var/tmp/spdk_tgt.sock 00:05:35.403 15:32:33 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57568 ']' 00:05:35.403 15:32:33 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.403 15:32:33 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.403 15:32:33 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:35.403 15:32:33 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.403 15:32:33 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.403 15:32:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.403 [2024-11-25 15:32:34.018106] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:05:35.403 [2024-11-25 15:32:34.018257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57568 ] 00:05:35.974 [2024-11-25 15:32:34.398971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.974 [2024-11-25 15:32:34.499732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.544 15:32:35 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.544 15:32:35 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:36.544 00:05:36.544 15:32:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:36.544 INFO: shutting down applications... 00:05:36.544 15:32:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:36.544 15:32:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:36.544 15:32:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:36.544 15:32:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.544 15:32:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57568 ]] 00:05:36.544 15:32:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57568 00:05:36.544 15:32:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.544 15:32:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.544 15:32:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57568 00:05:36.544 15:32:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.114 15:32:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.114 15:32:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.114 15:32:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57568 00:05:37.114 15:32:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.685 15:32:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.685 15:32:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.685 15:32:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57568 00:05:37.685 15:32:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.253 15:32:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.253 15:32:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.254 15:32:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57568 00:05:38.254 15:32:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.823 15:32:37 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:38.823 15:32:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.823 15:32:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57568 00:05:38.823 15:32:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.083 15:32:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.083 15:32:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.083 15:32:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57568 00:05:39.083 15:32:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.655 15:32:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.655 15:32:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.655 15:32:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57568 00:05:39.655 15:32:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:39.655 15:32:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:39.655 15:32:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:39.655 SPDK target shutdown done 00:05:39.655 15:32:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:39.655 Success 00:05:39.655 15:32:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:39.655 00:05:39.655 real 0m4.517s 00:05:39.655 user 0m3.833s 00:05:39.655 sys 0m0.551s 00:05:39.655 15:32:38 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.655 15:32:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:39.655 ************************************ 00:05:39.655 END TEST json_config_extra_key 00:05:39.655 ************************************ 00:05:39.655 15:32:38 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.655 15:32:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.655 15:32:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.655 15:32:38 -- common/autotest_common.sh@10 -- # set +x 00:05:39.655 ************************************ 00:05:39.655 START TEST alias_rpc 00:05:39.655 ************************************ 00:05:39.655 15:32:38 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.919 * Looking for test storage... 00:05:39.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:39.919 15:32:38 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.919 15:32:38 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.919 15:32:38 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.919 15:32:38 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:39.919 15:32:38 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.919 15:32:38 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:39.919 15:32:38 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.919 15:32:38 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.919 --rc genhtml_branch_coverage=1 00:05:39.919 --rc genhtml_function_coverage=1 00:05:39.919 --rc genhtml_legend=1 00:05:39.919 --rc geninfo_all_blocks=1 00:05:39.919 --rc geninfo_unexecuted_blocks=1 00:05:39.919 00:05:39.919 ' 00:05:39.919 15:32:38 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.919 --rc genhtml_branch_coverage=1 00:05:39.919 --rc genhtml_function_coverage=1 00:05:39.919 --rc 
genhtml_legend=1 00:05:39.919 --rc geninfo_all_blocks=1 00:05:39.919 --rc geninfo_unexecuted_blocks=1 00:05:39.919 00:05:39.919 ' 00:05:39.919 15:32:38 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.919 --rc genhtml_branch_coverage=1 00:05:39.919 --rc genhtml_function_coverage=1 00:05:39.919 --rc genhtml_legend=1 00:05:39.919 --rc geninfo_all_blocks=1 00:05:39.919 --rc geninfo_unexecuted_blocks=1 00:05:39.919 00:05:39.919 ' 00:05:39.919 15:32:38 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.919 --rc genhtml_branch_coverage=1 00:05:39.919 --rc genhtml_function_coverage=1 00:05:39.919 --rc genhtml_legend=1 00:05:39.919 --rc geninfo_all_blocks=1 00:05:39.919 --rc geninfo_unexecuted_blocks=1 00:05:39.919 00:05:39.919 ' 00:05:39.919 15:32:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:39.919 15:32:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57674 00:05:39.919 15:32:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.919 15:32:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57674 00:05:39.919 15:32:38 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57674 ']' 00:05:39.919 15:32:38 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.919 15:32:38 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.919 15:32:38 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:39.919 15:32:38 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.919 15:32:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.919 [2024-11-25 15:32:38.597067] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:05:39.919 [2024-11-25 15:32:38.597204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57674 ] 00:05:40.179 [2024-11-25 15:32:38.770498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.439 [2024-11-25 15:32:38.880439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.383 15:32:39 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.383 15:32:39 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:41.383 15:32:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:41.383 15:32:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57674 00:05:41.383 15:32:39 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57674 ']' 00:05:41.383 15:32:39 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57674 00:05:41.383 15:32:39 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:41.383 15:32:39 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.383 15:32:39 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57674 00:05:41.383 15:32:39 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.383 15:32:39 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.383 killing process with pid 57674 00:05:41.383 15:32:39 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57674' 00:05:41.383 15:32:39 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57674 00:05:41.383 15:32:39 alias_rpc -- common/autotest_common.sh@978 -- # wait 57674 00:05:43.923 00:05:43.923 real 0m3.935s 00:05:43.923 user 0m3.909s 00:05:43.923 sys 0m0.544s 00:05:43.923 15:32:42 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.923 15:32:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.923 ************************************ 00:05:43.923 END TEST alias_rpc 00:05:43.923 ************************************ 00:05:43.923 15:32:42 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:43.923 15:32:42 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:43.923 15:32:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.923 15:32:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.923 15:32:42 -- common/autotest_common.sh@10 -- # set +x 00:05:43.923 ************************************ 00:05:43.923 START TEST spdkcli_tcp 00:05:43.923 ************************************ 00:05:43.923 15:32:42 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:43.923 * Looking for test storage... 
00:05:43.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:43.923 15:32:42 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.923 15:32:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.923 15:32:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.923 15:32:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.923 15:32:42 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:43.923 15:32:42 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.923 15:32:42 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.923 --rc genhtml_branch_coverage=1 00:05:43.923 --rc genhtml_function_coverage=1 00:05:43.923 --rc genhtml_legend=1 00:05:43.923 --rc geninfo_all_blocks=1 00:05:43.923 --rc geninfo_unexecuted_blocks=1 00:05:43.923 00:05:43.923 ' 00:05:43.923 15:32:42 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.923 --rc genhtml_branch_coverage=1 00:05:43.923 --rc genhtml_function_coverage=1 00:05:43.923 --rc genhtml_legend=1 00:05:43.923 --rc geninfo_all_blocks=1 00:05:43.923 --rc geninfo_unexecuted_blocks=1 00:05:43.923 00:05:43.923 ' 00:05:43.923 15:32:42 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.923 --rc genhtml_branch_coverage=1 00:05:43.923 --rc genhtml_function_coverage=1 00:05:43.923 --rc genhtml_legend=1 00:05:43.923 --rc geninfo_all_blocks=1 00:05:43.923 --rc geninfo_unexecuted_blocks=1 00:05:43.923 00:05:43.923 ' 00:05:43.923 15:32:42 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.923 --rc genhtml_branch_coverage=1 00:05:43.923 --rc genhtml_function_coverage=1 00:05:43.923 --rc genhtml_legend=1 00:05:43.923 --rc geninfo_all_blocks=1 00:05:43.923 --rc geninfo_unexecuted_blocks=1 00:05:43.923 00:05:43.923 ' 00:05:43.923 15:32:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:43.923 15:32:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:43.923 15:32:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:43.923 15:32:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:43.924 15:32:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:43.924 15:32:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:43.924 15:32:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:43.924 15:32:42 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.924 15:32:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.924 15:32:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57781 00:05:43.924 15:32:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:43.924 15:32:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57781 00:05:43.924 15:32:42 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57781 ']' 00:05:43.924 15:32:42 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.924 15:32:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.924 15:32:42 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.924 15:32:42 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.924 15:32:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.924 [2024-11-25 15:32:42.598574] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:05:43.924 [2024-11-25 15:32:42.598720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57781 ] 00:05:44.184 [2024-11-25 15:32:42.773588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.444 [2024-11-25 15:32:42.887000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.444 [2024-11-25 15:32:42.887083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.386 15:32:43 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.386 15:32:43 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:45.386 15:32:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57798 00:05:45.386 15:32:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:45.386 15:32:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:45.386 [ 00:05:45.386 "bdev_malloc_delete", 
00:05:45.386 "bdev_malloc_create", 00:05:45.386 "bdev_null_resize", 00:05:45.386 "bdev_null_delete", 00:05:45.386 "bdev_null_create", 00:05:45.386 "bdev_nvme_cuse_unregister", 00:05:45.386 "bdev_nvme_cuse_register", 00:05:45.386 "bdev_opal_new_user", 00:05:45.386 "bdev_opal_set_lock_state", 00:05:45.386 "bdev_opal_delete", 00:05:45.386 "bdev_opal_get_info", 00:05:45.386 "bdev_opal_create", 00:05:45.386 "bdev_nvme_opal_revert", 00:05:45.386 "bdev_nvme_opal_init", 00:05:45.386 "bdev_nvme_send_cmd", 00:05:45.386 "bdev_nvme_set_keys", 00:05:45.386 "bdev_nvme_get_path_iostat", 00:05:45.386 "bdev_nvme_get_mdns_discovery_info", 00:05:45.386 "bdev_nvme_stop_mdns_discovery", 00:05:45.386 "bdev_nvme_start_mdns_discovery", 00:05:45.386 "bdev_nvme_set_multipath_policy", 00:05:45.386 "bdev_nvme_set_preferred_path", 00:05:45.386 "bdev_nvme_get_io_paths", 00:05:45.386 "bdev_nvme_remove_error_injection", 00:05:45.386 "bdev_nvme_add_error_injection", 00:05:45.386 "bdev_nvme_get_discovery_info", 00:05:45.386 "bdev_nvme_stop_discovery", 00:05:45.386 "bdev_nvme_start_discovery", 00:05:45.386 "bdev_nvme_get_controller_health_info", 00:05:45.386 "bdev_nvme_disable_controller", 00:05:45.386 "bdev_nvme_enable_controller", 00:05:45.386 "bdev_nvme_reset_controller", 00:05:45.386 "bdev_nvme_get_transport_statistics", 00:05:45.386 "bdev_nvme_apply_firmware", 00:05:45.386 "bdev_nvme_detach_controller", 00:05:45.386 "bdev_nvme_get_controllers", 00:05:45.386 "bdev_nvme_attach_controller", 00:05:45.386 "bdev_nvme_set_hotplug", 00:05:45.386 "bdev_nvme_set_options", 00:05:45.386 "bdev_passthru_delete", 00:05:45.386 "bdev_passthru_create", 00:05:45.386 "bdev_lvol_set_parent_bdev", 00:05:45.386 "bdev_lvol_set_parent", 00:05:45.386 "bdev_lvol_check_shallow_copy", 00:05:45.386 "bdev_lvol_start_shallow_copy", 00:05:45.386 "bdev_lvol_grow_lvstore", 00:05:45.386 "bdev_lvol_get_lvols", 00:05:45.386 "bdev_lvol_get_lvstores", 00:05:45.386 "bdev_lvol_delete", 00:05:45.386 "bdev_lvol_set_read_only", 
00:05:45.386 "bdev_lvol_resize", 00:05:45.386 "bdev_lvol_decouple_parent", 00:05:45.386 "bdev_lvol_inflate", 00:05:45.386 "bdev_lvol_rename", 00:05:45.386 "bdev_lvol_clone_bdev", 00:05:45.386 "bdev_lvol_clone", 00:05:45.386 "bdev_lvol_snapshot", 00:05:45.386 "bdev_lvol_create", 00:05:45.386 "bdev_lvol_delete_lvstore", 00:05:45.386 "bdev_lvol_rename_lvstore", 00:05:45.386 "bdev_lvol_create_lvstore", 00:05:45.386 "bdev_raid_set_options", 00:05:45.386 "bdev_raid_remove_base_bdev", 00:05:45.386 "bdev_raid_add_base_bdev", 00:05:45.386 "bdev_raid_delete", 00:05:45.386 "bdev_raid_create", 00:05:45.386 "bdev_raid_get_bdevs", 00:05:45.386 "bdev_error_inject_error", 00:05:45.386 "bdev_error_delete", 00:05:45.386 "bdev_error_create", 00:05:45.386 "bdev_split_delete", 00:05:45.386 "bdev_split_create", 00:05:45.386 "bdev_delay_delete", 00:05:45.386 "bdev_delay_create", 00:05:45.386 "bdev_delay_update_latency", 00:05:45.386 "bdev_zone_block_delete", 00:05:45.386 "bdev_zone_block_create", 00:05:45.386 "blobfs_create", 00:05:45.386 "blobfs_detect", 00:05:45.386 "blobfs_set_cache_size", 00:05:45.386 "bdev_aio_delete", 00:05:45.386 "bdev_aio_rescan", 00:05:45.386 "bdev_aio_create", 00:05:45.386 "bdev_ftl_set_property", 00:05:45.386 "bdev_ftl_get_properties", 00:05:45.386 "bdev_ftl_get_stats", 00:05:45.386 "bdev_ftl_unmap", 00:05:45.386 "bdev_ftl_unload", 00:05:45.386 "bdev_ftl_delete", 00:05:45.386 "bdev_ftl_load", 00:05:45.386 "bdev_ftl_create", 00:05:45.386 "bdev_virtio_attach_controller", 00:05:45.386 "bdev_virtio_scsi_get_devices", 00:05:45.386 "bdev_virtio_detach_controller", 00:05:45.386 "bdev_virtio_blk_set_hotplug", 00:05:45.386 "bdev_iscsi_delete", 00:05:45.386 "bdev_iscsi_create", 00:05:45.386 "bdev_iscsi_set_options", 00:05:45.386 "accel_error_inject_error", 00:05:45.386 "ioat_scan_accel_module", 00:05:45.386 "dsa_scan_accel_module", 00:05:45.386 "iaa_scan_accel_module", 00:05:45.386 "keyring_file_remove_key", 00:05:45.386 "keyring_file_add_key", 00:05:45.386 
"keyring_linux_set_options", 00:05:45.386 "fsdev_aio_delete", 00:05:45.386 "fsdev_aio_create", 00:05:45.386 "iscsi_get_histogram", 00:05:45.386 "iscsi_enable_histogram", 00:05:45.386 "iscsi_set_options", 00:05:45.386 "iscsi_get_auth_groups", 00:05:45.386 "iscsi_auth_group_remove_secret", 00:05:45.386 "iscsi_auth_group_add_secret", 00:05:45.386 "iscsi_delete_auth_group", 00:05:45.386 "iscsi_create_auth_group", 00:05:45.386 "iscsi_set_discovery_auth", 00:05:45.386 "iscsi_get_options", 00:05:45.386 "iscsi_target_node_request_logout", 00:05:45.386 "iscsi_target_node_set_redirect", 00:05:45.386 "iscsi_target_node_set_auth", 00:05:45.386 "iscsi_target_node_add_lun", 00:05:45.386 "iscsi_get_stats", 00:05:45.386 "iscsi_get_connections", 00:05:45.386 "iscsi_portal_group_set_auth", 00:05:45.386 "iscsi_start_portal_group", 00:05:45.386 "iscsi_delete_portal_group", 00:05:45.386 "iscsi_create_portal_group", 00:05:45.386 "iscsi_get_portal_groups", 00:05:45.386 "iscsi_delete_target_node", 00:05:45.386 "iscsi_target_node_remove_pg_ig_maps", 00:05:45.386 "iscsi_target_node_add_pg_ig_maps", 00:05:45.386 "iscsi_create_target_node", 00:05:45.386 "iscsi_get_target_nodes", 00:05:45.386 "iscsi_delete_initiator_group", 00:05:45.386 "iscsi_initiator_group_remove_initiators", 00:05:45.386 "iscsi_initiator_group_add_initiators", 00:05:45.386 "iscsi_create_initiator_group", 00:05:45.386 "iscsi_get_initiator_groups", 00:05:45.386 "nvmf_set_crdt", 00:05:45.386 "nvmf_set_config", 00:05:45.386 "nvmf_set_max_subsystems", 00:05:45.386 "nvmf_stop_mdns_prr", 00:05:45.386 "nvmf_publish_mdns_prr", 00:05:45.386 "nvmf_subsystem_get_listeners", 00:05:45.386 "nvmf_subsystem_get_qpairs", 00:05:45.386 "nvmf_subsystem_get_controllers", 00:05:45.386 "nvmf_get_stats", 00:05:45.386 "nvmf_get_transports", 00:05:45.386 "nvmf_create_transport", 00:05:45.386 "nvmf_get_targets", 00:05:45.386 "nvmf_delete_target", 00:05:45.386 "nvmf_create_target", 00:05:45.386 "nvmf_subsystem_allow_any_host", 00:05:45.387 
"nvmf_subsystem_set_keys", 00:05:45.387 "nvmf_subsystem_remove_host", 00:05:45.387 "nvmf_subsystem_add_host", 00:05:45.387 "nvmf_ns_remove_host", 00:05:45.387 "nvmf_ns_add_host", 00:05:45.387 "nvmf_subsystem_remove_ns", 00:05:45.387 "nvmf_subsystem_set_ns_ana_group", 00:05:45.387 "nvmf_subsystem_add_ns", 00:05:45.387 "nvmf_subsystem_listener_set_ana_state", 00:05:45.387 "nvmf_discovery_get_referrals", 00:05:45.387 "nvmf_discovery_remove_referral", 00:05:45.387 "nvmf_discovery_add_referral", 00:05:45.387 "nvmf_subsystem_remove_listener", 00:05:45.387 "nvmf_subsystem_add_listener", 00:05:45.387 "nvmf_delete_subsystem", 00:05:45.387 "nvmf_create_subsystem", 00:05:45.387 "nvmf_get_subsystems", 00:05:45.387 "env_dpdk_get_mem_stats", 00:05:45.387 "nbd_get_disks", 00:05:45.387 "nbd_stop_disk", 00:05:45.387 "nbd_start_disk", 00:05:45.387 "ublk_recover_disk", 00:05:45.387 "ublk_get_disks", 00:05:45.387 "ublk_stop_disk", 00:05:45.387 "ublk_start_disk", 00:05:45.387 "ublk_destroy_target", 00:05:45.387 "ublk_create_target", 00:05:45.387 "virtio_blk_create_transport", 00:05:45.387 "virtio_blk_get_transports", 00:05:45.387 "vhost_controller_set_coalescing", 00:05:45.387 "vhost_get_controllers", 00:05:45.387 "vhost_delete_controller", 00:05:45.387 "vhost_create_blk_controller", 00:05:45.387 "vhost_scsi_controller_remove_target", 00:05:45.387 "vhost_scsi_controller_add_target", 00:05:45.387 "vhost_start_scsi_controller", 00:05:45.387 "vhost_create_scsi_controller", 00:05:45.387 "thread_set_cpumask", 00:05:45.387 "scheduler_set_options", 00:05:45.387 "framework_get_governor", 00:05:45.387 "framework_get_scheduler", 00:05:45.387 "framework_set_scheduler", 00:05:45.387 "framework_get_reactors", 00:05:45.387 "thread_get_io_channels", 00:05:45.387 "thread_get_pollers", 00:05:45.387 "thread_get_stats", 00:05:45.387 "framework_monitor_context_switch", 00:05:45.387 "spdk_kill_instance", 00:05:45.387 "log_enable_timestamps", 00:05:45.387 "log_get_flags", 00:05:45.387 "log_clear_flag", 
00:05:45.387 "log_set_flag", 00:05:45.387 "log_get_level", 00:05:45.387 "log_set_level", 00:05:45.387 "log_get_print_level", 00:05:45.387 "log_set_print_level", 00:05:45.387 "framework_enable_cpumask_locks", 00:05:45.387 "framework_disable_cpumask_locks", 00:05:45.387 "framework_wait_init", 00:05:45.387 "framework_start_init", 00:05:45.387 "scsi_get_devices", 00:05:45.387 "bdev_get_histogram", 00:05:45.387 "bdev_enable_histogram", 00:05:45.387 "bdev_set_qos_limit", 00:05:45.387 "bdev_set_qd_sampling_period", 00:05:45.387 "bdev_get_bdevs", 00:05:45.387 "bdev_reset_iostat", 00:05:45.387 "bdev_get_iostat", 00:05:45.387 "bdev_examine", 00:05:45.387 "bdev_wait_for_examine", 00:05:45.387 "bdev_set_options", 00:05:45.387 "accel_get_stats", 00:05:45.387 "accel_set_options", 00:05:45.387 "accel_set_driver", 00:05:45.387 "accel_crypto_key_destroy", 00:05:45.387 "accel_crypto_keys_get", 00:05:45.387 "accel_crypto_key_create", 00:05:45.387 "accel_assign_opc", 00:05:45.387 "accel_get_module_info", 00:05:45.387 "accel_get_opc_assignments", 00:05:45.387 "vmd_rescan", 00:05:45.387 "vmd_remove_device", 00:05:45.387 "vmd_enable", 00:05:45.387 "sock_get_default_impl", 00:05:45.387 "sock_set_default_impl", 00:05:45.387 "sock_impl_set_options", 00:05:45.387 "sock_impl_get_options", 00:05:45.387 "iobuf_get_stats", 00:05:45.387 "iobuf_set_options", 00:05:45.387 "keyring_get_keys", 00:05:45.387 "framework_get_pci_devices", 00:05:45.387 "framework_get_config", 00:05:45.387 "framework_get_subsystems", 00:05:45.387 "fsdev_set_opts", 00:05:45.387 "fsdev_get_opts", 00:05:45.387 "trace_get_info", 00:05:45.387 "trace_get_tpoint_group_mask", 00:05:45.387 "trace_disable_tpoint_group", 00:05:45.387 "trace_enable_tpoint_group", 00:05:45.387 "trace_clear_tpoint_mask", 00:05:45.387 "trace_set_tpoint_mask", 00:05:45.387 "notify_get_notifications", 00:05:45.387 "notify_get_types", 00:05:45.387 "spdk_get_version", 00:05:45.387 "rpc_get_methods" 00:05:45.387 ] 00:05:45.387 15:32:43 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:45.387 15:32:43 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:45.387 15:32:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.387 15:32:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:45.387 15:32:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57781 00:05:45.387 15:32:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57781 ']' 00:05:45.387 15:32:43 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57781 00:05:45.387 15:32:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:45.387 15:32:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.387 15:32:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57781 00:05:45.387 15:32:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.387 15:32:44 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.387 killing process with pid 57781 00:05:45.387 15:32:44 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57781' 00:05:45.387 15:32:44 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57781 00:05:45.387 15:32:44 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57781 00:05:47.932 00:05:47.932 real 0m4.058s 00:05:47.932 user 0m7.263s 00:05:47.932 sys 0m0.615s 00:05:47.932 15:32:46 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.932 15:32:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.932 ************************************ 00:05:47.932 END TEST spdkcli_tcp 00:05:47.932 ************************************ 00:05:47.932 15:32:46 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.932 15:32:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.932 15:32:46 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.932 15:32:46 -- common/autotest_common.sh@10 -- # set +x 00:05:47.932 ************************************ 00:05:47.932 START TEST dpdk_mem_utility 00:05:47.932 ************************************ 00:05:47.932 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.932 * Looking for test storage... 00:05:47.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:47.932 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.932 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.932 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.932 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.932 15:32:46 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.932 15:32:46 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.932 15:32:46 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.932 15:32:46 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.932 15:32:46 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.932 15:32:46 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.932 15:32:46 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.932 15:32:46 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.932 15:32:46 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.932 15:32:46 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.932 15:32:46 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.932 15:32:46 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:47.932 15:32:46 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:47.932 
15:32:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.932 15:32:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.932 15:32:46 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:48.193 15:32:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:48.193 15:32:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.193 15:32:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:48.193 15:32:46 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.193 15:32:46 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:48.193 15:32:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:48.193 15:32:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.193 15:32:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:48.193 15:32:46 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.193 15:32:46 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.193 15:32:46 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.193 15:32:46 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:48.193 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.193 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.193 --rc genhtml_branch_coverage=1 00:05:48.193 --rc genhtml_function_coverage=1 00:05:48.193 --rc genhtml_legend=1 00:05:48.193 --rc geninfo_all_blocks=1 00:05:48.193 --rc geninfo_unexecuted_blocks=1 00:05:48.193 00:05:48.193 ' 00:05:48.193 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.193 --rc 
genhtml_branch_coverage=1 00:05:48.193 --rc genhtml_function_coverage=1 00:05:48.193 --rc genhtml_legend=1 00:05:48.193 --rc geninfo_all_blocks=1 00:05:48.193 --rc geninfo_unexecuted_blocks=1 00:05:48.193 00:05:48.193 ' 00:05:48.193 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.193 --rc genhtml_branch_coverage=1 00:05:48.193 --rc genhtml_function_coverage=1 00:05:48.193 --rc genhtml_legend=1 00:05:48.193 --rc geninfo_all_blocks=1 00:05:48.193 --rc geninfo_unexecuted_blocks=1 00:05:48.193 00:05:48.193 ' 00:05:48.193 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.193 --rc genhtml_branch_coverage=1 00:05:48.193 --rc genhtml_function_coverage=1 00:05:48.193 --rc genhtml_legend=1 00:05:48.193 --rc geninfo_all_blocks=1 00:05:48.193 --rc geninfo_unexecuted_blocks=1 00:05:48.193 00:05:48.193 ' 00:05:48.193 15:32:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:48.193 15:32:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.193 15:32:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57903 00:05:48.193 15:32:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57903 00:05:48.193 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57903 ']' 00:05:48.193 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.193 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:48.193 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.193 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.193 15:32:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.193 [2024-11-25 15:32:46.716821] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:05:48.193 [2024-11-25 15:32:46.716958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57903 ] 00:05:48.453 [2024-11-25 15:32:46.891333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.453 [2024-11-25 15:32:47.001327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.395 15:32:47 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.395 15:32:47 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:49.395 15:32:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:49.395 15:32:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:49.395 15:32:47 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.395 15:32:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.395 { 00:05:49.395 "filename": "/tmp/spdk_mem_dump.txt" 00:05:49.395 } 00:05:49.395 15:32:47 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.395 15:32:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:49.395 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:49.395 1 heaps 
totaling size 816.000000 MiB 00:05:49.395 size: 816.000000 MiB heap id: 0 00:05:49.395 end heaps---------- 00:05:49.395 9 mempools totaling size 595.772034 MiB 00:05:49.395 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:49.395 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:49.395 size: 92.545471 MiB name: bdev_io_57903 00:05:49.395 size: 50.003479 MiB name: msgpool_57903 00:05:49.395 size: 36.509338 MiB name: fsdev_io_57903 00:05:49.395 size: 21.763794 MiB name: PDU_Pool 00:05:49.395 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:49.395 size: 4.133484 MiB name: evtpool_57903 00:05:49.395 size: 0.026123 MiB name: Session_Pool 00:05:49.395 end mempools------- 00:05:49.395 6 memzones totaling size 4.142822 MiB 00:05:49.395 size: 1.000366 MiB name: RG_ring_0_57903 00:05:49.395 size: 1.000366 MiB name: RG_ring_1_57903 00:05:49.395 size: 1.000366 MiB name: RG_ring_4_57903 00:05:49.395 size: 1.000366 MiB name: RG_ring_5_57903 00:05:49.395 size: 0.125366 MiB name: RG_ring_2_57903 00:05:49.395 size: 0.015991 MiB name: RG_ring_3_57903 00:05:49.395 end memzones------- 00:05:49.395 15:32:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:49.395 heap id: 0 total size: 816.000000 MiB number of busy elements: 306 number of free elements: 18 00:05:49.395 list of free elements. 
size: 16.793579 MiB
00:05:49.395 element at address: 0x200006400000 with size: 1.995972 MiB
00:05:49.395 element at address: 0x20000a600000 with size: 1.995972 MiB
00:05:49.395 element at address: 0x200003e00000 with size: 1.991028 MiB
00:05:49.395 element at address: 0x200018d00040 with size: 0.999939 MiB
00:05:49.395 element at address: 0x200019100040 with size: 0.999939 MiB
00:05:49.395 element at address: 0x200019200000 with size: 0.999084 MiB
00:05:49.395 element at address: 0x200031e00000 with size: 0.994324 MiB
00:05:49.395 element at address: 0x200000400000 with size: 0.992004 MiB
00:05:49.395 element at address: 0x200018a00000 with size: 0.959656 MiB
00:05:49.395 element at address: 0x200019500040 with size: 0.936401 MiB
00:05:49.395 element at address: 0x200000200000 with size: 0.716980 MiB
00:05:49.395 element at address: 0x20001ac00000 with size: 0.563904 MiB
00:05:49.395 element at address: 0x200000c00000 with size: 0.490173 MiB
00:05:49.395 element at address: 0x200018e00000 with size: 0.487976 MiB
00:05:49.395 element at address: 0x200019600000 with size: 0.485413 MiB
00:05:49.395 element at address: 0x200012c00000 with size: 0.443481 MiB
00:05:49.395 element at address: 0x200028000000 with size: 0.390442 MiB
00:05:49.395 element at address: 0x200000800000 with size: 0.350891 MiB
00:05:49.395 list of standard malloc elements. size: 199.285522 MiB
00:05:49.395 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:05:49.395 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:05:49.395 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:05:49.395 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:05:49.395 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:05:49.395 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:49.395 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:05:49.395 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:49.395 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:05:49.395 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:05:49.396 element at address: 0x200012bff040 with size: 0.000305 MiB
00:05:49.396 [several hundred element entries with size: 0.000244 MiB, at addresses 0x2000002d7b00 through 0x20002806fe80, elided]
00:05:49.397 list of memzone associated elements. size: 599.920898 MiB
00:05:49.397 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:05:49.397 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:49.397 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:05:49.397 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:49.397 element at address: 0x200012df4740 with size: 92.045105 MiB
00:05:49.397 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57903_0
00:05:49.397 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:49.397 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57903_0
00:05:49.397 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:49.398 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57903_0
00:05:49.398 element at address: 0x2000197be900 with size: 20.255615 MiB
00:05:49.398 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:49.398 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:05:49.398 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:49.398 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:05:49.398 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57903_0
00:05:49.398 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:05:49.398 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57903
00:05:49.398 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:49.398 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57903
00:05:49.398 element at address: 0x200018efde00 with size: 1.008179 MiB
00:05:49.398 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:49.398 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:05:49.398 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:49.398 element at address: 0x200018afde00 with size: 1.008179 MiB
00:05:49.398 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:49.398 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:05:49.398 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:49.398 element at address: 0x200000cff100 with size: 1.000549 MiB
00:05:49.398 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57903
00:05:49.398 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:05:49.398 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57903
00:05:49.398 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:05:49.398 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57903
00:05:49.398 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:05:49.398 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57903
00:05:49.398 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:05:49.398 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57903
00:05:49.398 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:05:49.398 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57903
00:05:49.398 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:05:49.398 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:49.398 element at address: 0x200012c72280 with size: 0.500549 MiB
00:05:49.398 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:49.398 element at address: 0x20001967c440 with size: 0.250549 MiB
00:05:49.398 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:49.398 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:49.398 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57903
00:05:49.398 element at address: 0x20000085df80 with size: 0.125549 MiB
00:05:49.398 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57903
00:05:49.398 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:05:49.398 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:49.398 element at address: 0x200028064140 with size: 0.023804 MiB
00:05:49.398 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:49.398 element at address: 0x200000859d40 with size: 0.016174 MiB
00:05:49.398 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57903
00:05:49.398 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:05:49.398 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:49.398 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:49.398 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57903
00:05:49.398 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:49.398 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57903
00:05:49.398 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:49.398 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57903
00:05:49.398 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:05:49.398 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:49.398 15:32:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:49.398 15:32:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57903
00:05:49.398 15:32:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57903 ']'
00:05:49.398 15:32:47 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57903
00:05:49.398 15:32:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:49.398 15:32:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:49.398 15:32:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57903
00:05:49.398 15:32:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:49.398 15:32:47 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:49.398 killing process with pid 57903
00:05:49.398 15:32:47 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57903'
00:05:49.398 15:32:47 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57903
00:05:49.398 15:32:47 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57903
00:05:51.937
00:05:51.937 real 0m3.870s
00:05:51.937 user 0m3.780s
00:05:51.937 sys 0m0.540s
00:05:51.937 15:32:50 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:51.937 15:32:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:51.937 ************************************
00:05:51.937 END TEST dpdk_mem_utility
00:05:51.937 ************************************
00:05:51.937 15:32:50 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:51.937 15:32:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:51.937 15:32:50 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:51.937 15:32:50 -- common/autotest_common.sh@10 -- # set +x
00:05:51.937 ************************************
00:05:51.937 START TEST event
00:05:51.937 ************************************
00:05:51.937 15:32:50 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:51.937 * Looking for test storage...
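The killprocess trace above follows a guard-then-signal pattern: confirm the pid argument is non-empty, probe liveness with `kill -0`, refuse to kill a `sudo` wrapper, then `kill` and `wait`. Below is a minimal bash sketch of that pattern, reconstructed from the xtrace; it is illustrative only, not SPDK's actual autotest_common.sh, and it assumes a Linux procps `ps`.

```shell
#!/usr/bin/env bash
# Illustrative sketch reconstructed from the xtrace above; not the real
# autotest_common.sh killprocess. Linux-only (uses procps `ps --no-headers`).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1               # mirrors: '[' -z 57903 ']'
    kill -0 "$pid" 2>/dev/null || return 1  # mirrors: kill -0 57903 (alive?)
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" != sudo ] || return 1 # mirrors: '[' reactor_0 = sudo ']'
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true         # reap it if it is our child
}

sleep 60 &    # stand-in for the SPDK app the log kills (pid 57903 there)
killprocess $!
```

Note that `wait` only reaps the pid when it is a child of the calling shell, which is why the log runs killprocess in the same shell that launched the target app.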
00:05:51.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:51.937 15:32:50 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:51.937 15:32:50 event -- common/autotest_common.sh@1693 -- # lcov --version
00:05:51.937 15:32:50 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:51.937 15:32:50 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:51.937 15:32:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:51.937 15:32:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:51.937 15:32:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:51.937 15:32:50 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:51.937 15:32:50 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:51.937 15:32:50 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:51.937 15:32:50 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:51.937 15:32:50 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:51.937 15:32:50 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:51.937 15:32:50 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:51.937 15:32:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:51.937 15:32:50 event -- scripts/common.sh@344 -- # case "$op" in
00:05:51.937 15:32:50 event -- scripts/common.sh@345 -- # : 1
00:05:51.937 15:32:50 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:51.937 15:32:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:51.937 15:32:50 event -- scripts/common.sh@365 -- # decimal 1
00:05:51.937 15:32:50 event -- scripts/common.sh@353 -- # local d=1
00:05:51.937 15:32:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:51.937 15:32:50 event -- scripts/common.sh@355 -- # echo 1
00:05:51.937 15:32:50 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:51.937 15:32:50 event -- scripts/common.sh@366 -- # decimal 2
00:05:51.937 15:32:50 event -- scripts/common.sh@353 -- # local d=2
00:05:51.937 15:32:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:51.937 15:32:50 event -- scripts/common.sh@355 -- # echo 2
00:05:51.937 15:32:50 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:51.937 15:32:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:51.937 15:32:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:51.937 15:32:50 event -- scripts/common.sh@368 -- # return 0
00:05:51.937 15:32:50 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:51.937 15:32:50 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:51.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.937 --rc genhtml_branch_coverage=1
00:05:51.937 --rc genhtml_function_coverage=1
00:05:51.937 --rc genhtml_legend=1
00:05:51.937 --rc geninfo_all_blocks=1
00:05:51.937 --rc geninfo_unexecuted_blocks=1
00:05:51.937
00:05:51.937 '
00:05:51.937 15:32:50 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:51.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.937 --rc genhtml_branch_coverage=1
00:05:51.937 --rc genhtml_function_coverage=1
00:05:51.937 --rc genhtml_legend=1
00:05:51.937 --rc geninfo_all_blocks=1
00:05:51.937 --rc geninfo_unexecuted_blocks=1
00:05:51.937
00:05:51.937 '
00:05:51.937 15:32:50 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:51.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.937 --rc genhtml_branch_coverage=1
00:05:51.937 --rc genhtml_function_coverage=1
00:05:51.937 --rc genhtml_legend=1
00:05:51.937 --rc geninfo_all_blocks=1
00:05:51.937 --rc geninfo_unexecuted_blocks=1
00:05:51.937
00:05:51.938 '
00:05:51.938 15:32:50 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:51.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.937 --rc genhtml_branch_coverage=1
00:05:51.937 --rc genhtml_function_coverage=1
00:05:51.937 --rc genhtml_legend=1
00:05:51.937 --rc geninfo_all_blocks=1
00:05:51.937 --rc geninfo_unexecuted_blocks=1
00:05:51.938 '
00:05:51.938 15:32:50 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:51.938 15:32:50 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:51.938 15:32:50 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:51.938 15:32:50 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:51.938 15:32:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:51.938 15:32:50 event -- common/autotest_common.sh@10 -- # set +x
00:05:51.938 ************************************
00:05:51.938 START TEST event_perf
00:05:51.938 ************************************
00:05:51.938 15:32:50 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:52.198 Running I/O for 1 seconds...[2024-11-25 15:32:50.621339] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization...
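The `cmp_versions` xtrace above splits each version string on `.`, `-`, and `:` via IFS, then compares the components numerically, index by index, with missing components treated as 0 (so `1.15 < 2` holds because 1 < 2 at the first index). Below is a minimal bash sketch of that comparison; `lt` here is a reconstruction from the trace, not the scripts/common.sh source.

```shell
#!/usr/bin/env bash
# Illustrative reconstruction of the component-wise version comparison traced
# above; not the real scripts/common.sh. `lt A B` succeeds when A < B.
lt() {
    local IFS=.-:                 # mirrors: IFS=.-:  (split on '.', '-', ':')
    local -a ver1 ver2
    read -ra ver1 <<< "$1"        # mirrors: read -ra ver1
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a > b )) && return 1
        (( a < b )) && return 0   # mirrors the trace ending in: return 0
    done
    return 1                      # equal versions are not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

Because the comparison is numeric per component, `lt 1.9 1.15` is true (9 < 15), which a plain string comparison would get wrong.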
00:05:52.198 [2024-11-25 15:32:50.621439] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58006 ] 00:05:52.198 [2024-11-25 15:32:50.797261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.462 [2024-11-25 15:32:50.913709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.462 [2024-11-25 15:32:50.913889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.462 Running I/O for 1 seconds...[2024-11-25 15:32:50.914252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.462 [2024-11-25 15:32:50.914298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.859 00:05:53.859 lcore 0: 208589 00:05:53.859 lcore 1: 208586 00:05:53.859 lcore 2: 208588 00:05:53.859 lcore 3: 208589 00:05:53.859 done. 
00:05:53.859 00:05:53.859 real 0m1.573s 00:05:53.859 user 0m4.328s 00:05:53.859 sys 0m0.124s 00:05:53.859 15:32:52 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.859 15:32:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.859 ************************************ 00:05:53.859 END TEST event_perf 00:05:53.859 ************************************ 00:05:53.859 15:32:52 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:53.859 15:32:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:53.859 15:32:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.859 15:32:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.859 ************************************ 00:05:53.859 START TEST event_reactor 00:05:53.859 ************************************ 00:05:53.859 15:32:52 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:53.859 [2024-11-25 15:32:52.253335] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:05:53.859 [2024-11-25 15:32:52.253461] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58051 ] 00:05:53.859 [2024-11-25 15:32:52.424638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.859 [2024-11-25 15:32:52.537828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.241 test_start 00:05:55.241 oneshot 00:05:55.241 tick 100 00:05:55.241 tick 100 00:05:55.241 tick 250 00:05:55.241 tick 100 00:05:55.241 tick 100 00:05:55.241 tick 250 00:05:55.241 tick 100 00:05:55.241 tick 500 00:05:55.241 tick 100 00:05:55.241 tick 100 00:05:55.241 tick 250 00:05:55.241 tick 100 00:05:55.241 tick 100 00:05:55.241 test_end 00:05:55.241 00:05:55.241 real 0m1.546s 00:05:55.241 user 0m1.348s 00:05:55.242 sys 0m0.090s 00:05:55.242 15:32:53 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.242 15:32:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:55.242 ************************************ 00:05:55.242 END TEST event_reactor 00:05:55.242 ************************************ 00:05:55.242 15:32:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.242 15:32:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:55.242 15:32:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.242 15:32:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.242 ************************************ 00:05:55.242 START TEST event_reactor_perf 00:05:55.242 ************************************ 00:05:55.242 15:32:53 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.242 [2024-11-25 
15:32:53.864854] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:05:55.242 [2024-11-25 15:32:53.864961] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58082 ] 00:05:55.501 [2024-11-25 15:32:54.039460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.501 [2024-11-25 15:32:54.152608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.884 test_start 00:05:56.884 test_end 00:05:56.884 Performance: 394810 events per second 00:05:56.884 00:05:56.884 real 0m1.557s 00:05:56.884 user 0m1.356s 00:05:56.884 sys 0m0.092s 00:05:56.884 15:32:55 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.884 15:32:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.884 ************************************ 00:05:56.884 END TEST event_reactor_perf 00:05:56.884 ************************************ 00:05:56.884 15:32:55 event -- event/event.sh@49 -- # uname -s 00:05:56.884 15:32:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:56.884 15:32:55 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:56.884 15:32:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.884 15:32:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.884 15:32:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.884 ************************************ 00:05:56.885 START TEST event_scheduler 00:05:56.885 ************************************ 00:05:56.885 15:32:55 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:56.885 * Looking for test storage... 
00:05:57.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:57.145 15:32:55 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.145 15:32:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.145 15:32:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.145 15:32:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.145 15:32:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:57.145 15:32:55 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.145 15:32:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.145 --rc genhtml_branch_coverage=1 00:05:57.145 --rc genhtml_function_coverage=1 00:05:57.145 --rc genhtml_legend=1 00:05:57.145 --rc geninfo_all_blocks=1 00:05:57.145 --rc geninfo_unexecuted_blocks=1 00:05:57.145 00:05:57.145 ' 00:05:57.145 15:32:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.145 --rc genhtml_branch_coverage=1 00:05:57.145 --rc genhtml_function_coverage=1 00:05:57.145 --rc 
genhtml_legend=1 00:05:57.145 --rc geninfo_all_blocks=1 00:05:57.145 --rc geninfo_unexecuted_blocks=1 00:05:57.145 00:05:57.145 ' 00:05:57.145 15:32:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.145 --rc genhtml_branch_coverage=1 00:05:57.145 --rc genhtml_function_coverage=1 00:05:57.145 --rc genhtml_legend=1 00:05:57.145 --rc geninfo_all_blocks=1 00:05:57.145 --rc geninfo_unexecuted_blocks=1 00:05:57.145 00:05:57.145 ' 00:05:57.145 15:32:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.145 --rc genhtml_branch_coverage=1 00:05:57.145 --rc genhtml_function_coverage=1 00:05:57.145 --rc genhtml_legend=1 00:05:57.145 --rc geninfo_all_blocks=1 00:05:57.145 --rc geninfo_unexecuted_blocks=1 00:05:57.145 00:05:57.145 ' 00:05:57.145 15:32:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:57.145 15:32:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58158 00:05:57.145 15:32:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:57.145 15:32:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.145 15:32:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58158 00:05:57.146 15:32:55 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58158 ']' 00:05:57.146 15:32:55 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.146 15:32:55 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.146 15:32:55 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.146 15:32:55 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.146 15:32:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.146 [2024-11-25 15:32:55.743629] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:05:57.146 [2024-11-25 15:32:55.743764] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58158 ] 00:05:57.406 [2024-11-25 15:32:55.899411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.406 [2024-11-25 15:32:56.012584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.406 [2024-11-25 15:32:56.012829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.406 [2024-11-25 15:32:56.012758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.406 [2024-11-25 15:32:56.012841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.977 15:32:56 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.977 15:32:56 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:57.977 15:32:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:57.977 15:32:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.977 15:32:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.977 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.977 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.977 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.977 POWER: Cannot set governor of lcore 0 to performance 00:05:57.977 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.977 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.977 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.977 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.977 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:57.977 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:57.977 POWER: Unable to set Power Management Environment for lcore 0 00:05:57.977 [2024-11-25 15:32:56.573238] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:57.977 [2024-11-25 15:32:56.573262] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:57.977 [2024-11-25 15:32:56.573273] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:57.977 [2024-11-25 15:32:56.573292] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:57.977 [2024-11-25 15:32:56.573300] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:57.977 [2024-11-25 15:32:56.573309] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:57.977 15:32:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.977 15:32:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:57.977 15:32:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.977 15:32:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.237 [2024-11-25 15:32:56.874764] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:58.237 15:32:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.237 15:32:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:58.237 15:32:56 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.237 15:32:56 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.237 15:32:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.237 ************************************ 00:05:58.237 START TEST scheduler_create_thread 00:05:58.237 ************************************ 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.237 2 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.237 3 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.237 4 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.237 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.498 5 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.498 6 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:58.498 7 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.498 8 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.498 9 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.498 10 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.498 15:32:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.879 15:32:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.879 15:32:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:59.879 15:32:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:59.879 15:32:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.879 15:32:58 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.259 15:32:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.259 00:06:01.259 real 0m2.612s 00:06:01.259 user 0m0.026s 00:06:01.259 sys 0m0.010s 00:06:01.259 15:32:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.259 15:32:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.259 ************************************ 00:06:01.259 END TEST scheduler_create_thread 00:06:01.259 ************************************ 00:06:01.259 15:32:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:01.259 15:32:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58158 00:06:01.259 15:32:59 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58158 ']' 00:06:01.260 15:32:59 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58158 00:06:01.260 15:32:59 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:01.260 15:32:59 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.260 15:32:59 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58158 00:06:01.260 15:32:59 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:01.260 15:32:59 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:01.260 15:32:59 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58158' 00:06:01.260 killing process with pid 58158 00:06:01.260 15:32:59 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58158 00:06:01.260 15:32:59 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58158 00:06:01.519 [2024-11-25 15:32:59.978235] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:02.460 00:06:02.461 real 0m5.637s 00:06:02.461 user 0m9.683s 00:06:02.461 sys 0m0.488s 00:06:02.461 15:33:01 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.461 ************************************ 00:06:02.461 END TEST event_scheduler 00:06:02.461 ************************************ 00:06:02.461 15:33:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.461 15:33:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:02.720 15:33:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:02.720 15:33:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.720 15:33:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.720 15:33:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.720 ************************************ 00:06:02.720 START TEST app_repeat 00:06:02.720 ************************************ 00:06:02.720 15:33:01 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:02.720 15:33:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.720 15:33:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.720 15:33:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:02.720 15:33:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.720 15:33:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:02.720 15:33:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:02.721 15:33:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:02.721 15:33:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58264 00:06:02.721 15:33:01 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:02.721 
15:33:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.721 15:33:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58264' 00:06:02.721 Process app_repeat pid: 58264 00:06:02.721 15:33:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.721 spdk_app_start Round 0 00:06:02.721 15:33:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:02.721 15:33:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58264 /var/tmp/spdk-nbd.sock 00:06:02.721 15:33:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58264 ']' 00:06:02.721 15:33:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.721 15:33:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:02.721 15:33:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.721 15:33:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.721 15:33:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.721 [2024-11-25 15:33:01.234510] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:06:02.721 [2024-11-25 15:33:01.234654] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58264 ] 00:06:02.979 [2024-11-25 15:33:01.421960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.979 [2024-11-25 15:33:01.531932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.979 [2024-11-25 15:33:01.531968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.549 15:33:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.549 15:33:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:03.550 15:33:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.810 Malloc0 00:06:03.810 15:33:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.070 Malloc1 00:06:04.070 15:33:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.070 15:33:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.070 15:33:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.070 15:33:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.070 15:33:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.070 15:33:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.070 15:33:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.070 15:33:02 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:04.070 15:33:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:04.070 15:33:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:04.070 15:33:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:04.070 15:33:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:04.070 15:33:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:04.070 15:33:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:04.070 15:33:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:04.070 15:33:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:04.331 /dev/nbd0
00:06:04.331 15:33:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:04.331 15:33:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:04.331 15:33:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:04.331 15:33:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:04.331 15:33:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:04.331 15:33:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:04.331 15:33:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:04.331 15:33:02 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:04.331 15:33:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:04.331 15:33:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:04.331 15:33:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:04.331 1+0 records in
00:06:04.331 1+0 records out
00:06:04.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300541 s, 13.6 MB/s
00:06:04.331 15:33:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:04.331 15:33:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:04.331 15:33:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:04.331 15:33:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:04.331 15:33:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:04.331 15:33:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:04.331 15:33:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:04.331 15:33:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:04.591 /dev/nbd1
00:06:04.591 15:33:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:04.591 15:33:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:04.591 15:33:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:04.591 15:33:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:04.591 15:33:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:04.591 15:33:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:04.591 15:33:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:04.591 15:33:03 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:04.591 15:33:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:04.591 15:33:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:04.591 15:33:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:04.591 1+0 records in
00:06:04.591 1+0 records out
00:06:04.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344255 s, 11.9 MB/s
00:06:04.591 15:33:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:04.591 15:33:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:04.591 15:33:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:04.591 15:33:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:04.591 15:33:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:04.591 15:33:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:04.591 15:33:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:04.591 15:33:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:04.591 15:33:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:04.591 15:33:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:04.852 {
00:06:04.852 "nbd_device": "/dev/nbd0",
00:06:04.852 "bdev_name": "Malloc0"
00:06:04.852 },
00:06:04.852 {
00:06:04.852 "nbd_device": "/dev/nbd1",
00:06:04.852 "bdev_name": "Malloc1"
00:06:04.852 }
00:06:04.852 ]'
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:04.852 {
00:06:04.852 "nbd_device": "/dev/nbd0",
00:06:04.852 "bdev_name": "Malloc0"
00:06:04.852 },
00:06:04.852 {
00:06:04.852 "nbd_device": "/dev/nbd1",
00:06:04.852 "bdev_name": "Malloc1"
00:06:04.852 }
00:06:04.852 ]'
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
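The waitfornbd traces above follow a simple poll-then-read pattern: loop up to 20 times until the device name appears as a whole word in /proc/partitions, then read one 4 KiB block back with `dd ... iflag=direct` to confirm the device actually services I/O. A minimal standalone sketch of the polling half (the partitions file is a parameter here purely so the sketch can run without a real nbd device; the SPDK helper reads /proc/partitions directly, and the function name is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd-style retry loop seen in the trace above.
# partitions_file is parameterized only for testability; SPDK's helper
# hard-codes /proc/partitions. Returns 0 once the device is listed,
# 1 if it never shows up within 20 attempts.
waitfornbd_sketch() {
	local nbd_name=$1 partitions_file=$2 i
	for ((i = 1; i <= 20; i++)); do
		# -w: match the device name as a whole word, so nbd0 does not match nbd01
		if grep -q -w "$nbd_name" "$partitions_file"; then
			return 0
		fi
		sleep 0.1
	done
	return 1
}
```

In the trace, the successful grep is followed by a one-block `dd` read and a `stat` on the copied file, which is why a bare listing in /proc/partitions alone is not treated as success.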
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:04.852 /dev/nbd1'
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:04.852 /dev/nbd1'
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:04.852 15:33:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:04.853 256+0 records in
00:06:04.853 256+0 records out
00:06:04.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127276 s, 82.4 MB/s
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:04.853 256+0 records in
00:06:04.853 256+0 records out
00:06:04.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202047 s, 51.9 MB/s
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:04.853 256+0 records in
00:06:04.853 256+0 records out
00:06:04.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260079 s, 40.3 MB/s
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:04.853 15:33:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:05.113 15:33:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:05.113 15:33:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:05.113 15:33:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:05.113 15:33:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:05.113 15:33:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:05.113 15:33:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:05.113 15:33:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:05.113 15:33:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:05.113 15:33:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:05.113 15:33:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:05.373 15:33:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:05.373 15:33:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:05.373 15:33:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:05.373 15:33:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:05.373 15:33:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:05.373 15:33:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:05.373 15:33:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:05.373 15:33:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:05.373 15:33:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:05.373 15:33:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:05.373 15:33:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:05.633 15:33:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:05.633 15:33:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:05.633 15:33:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:05.633 15:33:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:05.633 15:33:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:05.633 15:33:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:05.633 15:33:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:05.633 15:33:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:05.633 15:33:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:05.633 15:33:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:05.633 15:33:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:05.633 15:33:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:05.633 15:33:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:05.893 15:33:04 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:07.275 [2024-11-25 15:33:05.607140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:07.275 [2024-11-25 15:33:05.708158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.275 [2024-11-25 15:33:05.708164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:07.275 [2024-11-25 15:33:05.894991] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:07.275 [2024-11-25 15:33:05.895088] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:09.184 15:33:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:09.184 spdk_app_start Round 1
00:06:09.184 15:33:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:09.184 15:33:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58264 /var/tmp/spdk-nbd.sock
00:06:09.184 15:33:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58264 ']'
00:06:09.184 15:33:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:09.184 15:33:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:09.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:09.184 15:33:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
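The `waitforlisten 58264 /var/tmp/spdk-nbd.sock` step above (with `max_retries=100`) waits for the restarted app instance to come back up and expose its RPC socket before the next round proceeds. A reduced sketch of that wait, under the assumption that "listening" can be approximated by the UNIX-domain socket file appearing (the real SPDK helper additionally checks that the pid is alive and that an RPC call succeeds; `wait_for_socket` is an illustrative name, not SPDK's):

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style loop: poll until a server has created
# its UNIX-domain RPC socket, giving up after max_retries attempts.
# Simplification vs. the real helper: no pid liveness check, no RPC probe.
wait_for_socket() {
	local sock=$1 max_retries=${2:-100} i
	for ((i = 0; i < max_retries; i++)); do
		# -S is true only for a socket-type file, not a plain file
		[ -S "$sock" ] && return 0
		sleep 0.1
	done
	echo "timed out waiting for $sock" >&2
	return 1
}
```

Polling the filesystem this way is racy in the general case (the socket can exist before the server calls listen()), which is why the production helper follows up with an actual RPC round-trip.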
00:06:09.184 15:33:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:09.184 15:33:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:09.184 15:33:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:09.184 15:33:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:09.184 15:33:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:09.443 Malloc0
00:06:09.443 15:33:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:09.744 Malloc1
00:06:09.744 15:33:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:09.744 15:33:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:09.744 15:33:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:09.744 15:33:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:09.744 15:33:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:09.744 15:33:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:09.744 15:33:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:09.744 15:33:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:09.744 15:33:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:09.744 15:33:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:09.744 15:33:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:09.744 15:33:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:09.744 15:33:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:09.744 15:33:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:09.744 15:33:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:09.744 15:33:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:10.030 /dev/nbd0
00:06:10.030 15:33:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:10.030 15:33:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:10.030 1+0 records in
00:06:10.030 1+0 records out
00:06:10.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043304 s, 9.5 MB/s
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:10.030 15:33:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:10.030 15:33:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:10.030 15:33:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:10.030 /dev/nbd1
00:06:10.030 15:33:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:10.030 15:33:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:10.030 1+0 records in
00:06:10.030 1+0 records out
00:06:10.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328517 s, 12.5 MB/s
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:10.030 15:33:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:10.290 15:33:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:10.290 15:33:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:10.290 15:33:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:10.290 15:33:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:10.290 15:33:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:10.290 15:33:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.290 15:33:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:10.290 15:33:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:10.290 {
00:06:10.290 "nbd_device": "/dev/nbd0",
00:06:10.290 "bdev_name": "Malloc0"
00:06:10.290 },
00:06:10.290 {
00:06:10.290 "nbd_device": "/dev/nbd1",
00:06:10.290 "bdev_name": "Malloc1"
00:06:10.290 }
00:06:10.290 ]'
00:06:10.290 15:33:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:10.290 15:33:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:10.290 {
00:06:10.290 "nbd_device": "/dev/nbd0",
00:06:10.290 "bdev_name": "Malloc0"
00:06:10.290 },
00:06:10.290 {
00:06:10.290 "nbd_device": "/dev/nbd1",
00:06:10.290 "bdev_name": "Malloc1"
00:06:10.290 }
00:06:10.290 ]'
00:06:10.290 15:33:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:10.290 /dev/nbd1'
00:06:10.290 15:33:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:10.290 /dev/nbd1'
00:06:10.290 15:33:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:10.550 15:33:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:10.550 15:33:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:10.550 15:33:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:10.550 15:33:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:10.550 15:33:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:10.550 15:33:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.550 15:33:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:10.550 15:33:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:10.550 15:33:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:10.550 15:33:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:10.550 15:33:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:10.550 256+0 records in
00:06:10.550 256+0 records out
00:06:10.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128982 s, 81.3 MB/s
00:06:10.550 15:33:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:10.550 15:33:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:10.550 256+0 records in
00:06:10.550 256+0 records out
00:06:10.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202658 s, 51.7 MB/s
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:10.551 256+0 records in
00:06:10.551 256+0 records out
00:06:10.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238095 s, 44.0 MB/s
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
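The `nbd_dd_data_verify ... write` / `... verify` pair traced above boils down to three steps: generate a 1 MiB (256 x 4 KiB) random reference file, `dd` it onto every nbd device, then `cmp -b -n 1M` each device against the reference. A sketch of that round trip using ordinary files in place of /dev/nbd0 and /dev/nbd1, so it can run anywhere (on real block devices the write would additionally use `oflag=direct`, as in the trace):

```shell
#!/usr/bin/env bash
set -e
# Sketch of the write-then-verify pattern from the trace, with plain
# files standing in for the nbd devices (hence no oflag=direct here).
workdir=$(mktemp -d)
reference=$workdir/nbdrandtest
targets=("$workdir/nbd0" "$workdir/nbd1")

# write phase: 256 x 4 KiB = 1 MiB of random data, copied to every target
dd if=/dev/urandom of="$reference" bs=4096 count=256 status=none
for t in "${targets[@]}"; do
	dd if="$reference" of="$t" bs=4096 count=256 status=none
done

# verify phase: byte-compare the first 1M of each target against the
# reference; cmp exits nonzero (failing the script) on any mismatch
for t in "${targets[@]}"; do
	cmp -b -n 1M "$reference" "$t"
done
rm -rf "$workdir"
echo "verify ok"
```

Because random data is written, a stale or partially written device cannot pass the compare by accident, which is the point of regenerating the reference file on every round.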
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:10.551 15:33:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:10.810 15:33:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:10.810 15:33:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:10.810 15:33:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:10.810 15:33:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:10.810 15:33:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:10.810 15:33:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:10.810 15:33:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:10.810 15:33:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:10.810 15:33:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:10.810 15:33:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:11.070 15:33:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:11.070 15:33:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:11.070 15:33:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:11.070 15:33:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:11.070 15:33:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:11.070 15:33:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:11.070 15:33:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:11.070 15:33:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:11.070 15:33:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:11.070 15:33:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:11.070 15:33:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:11.070 15:33:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:11.070 15:33:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:11.070 15:33:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:11.331 15:33:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:11.331 15:33:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:11.331 15:33:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:11.331 15:33:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:11.331 15:33:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:11.331 15:33:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:11.331 15:33:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:11.331 15:33:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:11.331 15:33:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:11.331 15:33:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:11.592 15:33:10 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:12.972 [2024-11-25 15:33:11.224393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:12.972 [2024-11-25 15:33:11.324684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.972 [2024-11-25 15:33:11.324714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:12.972 [2024-11-25 15:33:11.506584] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:12.972 [2024-11-25 15:33:11.506663] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
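The nbd_get_count sequence that repeats throughout the log derives the device count purely from the `nbd_get_disks` JSON reply: `jq -r '.[] | .nbd_device'` extracts one device path per line, and `grep -c /dev/nbd` counts them. The `-- # true` step visible in the trace is the fallback for the empty list, since `grep -c` exits nonzero when nothing matches. A standalone sketch of that counting trick against a canned payload (the here-string JSON below is hand-written to mimic an RPC reply, not captured from one, and `count_nbd_devices` is an illustrative name):

```shell
#!/usr/bin/env bash
# Sketch of the nbd_get_count counting trick from the trace: count
# attached devices by parsing an nbd_get_disks-style JSON array.
count_nbd_devices() {
	local json=$1 count
	# grep -c still prints 0 on empty input but exits 1, so `|| true`
	# keeps the pipeline's status from aborting callers running `set -e`
	count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
	echo "$count"
}

disks='[{"nbd_device":"/dev/nbd0","bdev_name":"Malloc0"},
        {"nbd_device":"/dev/nbd1","bdev_name":"Malloc1"}]'
count_nbd_devices "$disks"   # prints 2
count_nbd_devices '[]'       # prints 0
```

Counting via `grep -c` rather than `jq length` means only entries whose device path actually starts an /dev/nbd node are counted, which is what the test asserts against its expected disk count before and after `nbd_stop_disks`.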
00:06:14.879 15:33:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:14.879 spdk_app_start Round 2
00:06:14.879 15:33:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:14.879 15:33:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58264 /var/tmp/spdk-nbd.sock
00:06:14.879 15:33:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58264 ']'
00:06:14.879 15:33:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:14.879 15:33:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:14.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:14.879 15:33:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:14.879 15:33:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:14.879 15:33:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:14.879 15:33:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:14.879 15:33:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:14.879 15:33:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:15.139 Malloc0
00:06:15.139 15:33:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:15.400 Malloc1
00:06:15.400 15:33:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:15.400 15:33:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:15.400 15:33:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
15:33:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:15.400 15:33:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:15.400 15:33:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:15.400 15:33:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:15.400 15:33:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:15.400 15:33:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:15.400 15:33:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:15.400 15:33:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:15.400 15:33:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:15.400 15:33:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:15.400 15:33:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:15.400 15:33:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:15.400 15:33:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:15.659 /dev/nbd0
00:06:15.659 15:33:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:15.659 15:33:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:15.659 15:33:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:15.659 15:33:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:15.659 15:33:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:15.659 15:33:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:15.659 15:33:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:15.659 15:33:14 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:15.659 15:33:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:15.659 15:33:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:15.659 15:33:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:15.659 1+0 records in
00:06:15.659 1+0 records out
00:06:15.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385707 s, 10.6 MB/s
00:06:15.659 15:33:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:15.659 15:33:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:15.659 15:33:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:15.659 15:33:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:15.659 15:33:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:15.659 15:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:15.659 15:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:15.659 15:33:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:15.659 /dev/nbd1
00:06:15.920 15:33:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:15.920 15:33:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:15.920 15:33:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:15.920 15:33:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:15.920 15:33:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:15.920 15:33:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:15.920 15:33:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:15.920 15:33:14 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:15.920 15:33:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:15.920 15:33:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:15.920 15:33:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:15.920 1+0 records in
00:06:15.920 1+0 records out
00:06:15.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348736 s, 11.7 MB/s
00:06:15.920 15:33:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:15.920 15:33:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:15.920 15:33:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:15.920 15:33:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:15.920 15:33:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:15.920 15:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:15.920 15:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:15.920 15:33:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:15.920 15:33:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:15.920 15:33:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:15.920 15:33:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:15.920 {
00:06:15.920 "nbd_device": "/dev/nbd0",
00:06:15.920 "bdev_name": "Malloc0"
00:06:15.920 },
00:06:15.920 {
00:06:15.920 "nbd_device": "/dev/nbd1",
00:06:15.920 "bdev_name":
"Malloc1" 00:06:15.920 } 00:06:15.920 ]' 00:06:15.920 15:33:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.920 { 00:06:15.920 "nbd_device": "/dev/nbd0", 00:06:15.920 "bdev_name": "Malloc0" 00:06:15.920 }, 00:06:15.920 { 00:06:15.920 "nbd_device": "/dev/nbd1", 00:06:15.920 "bdev_name": "Malloc1" 00:06:15.920 } 00:06:15.920 ]' 00:06:15.920 15:33:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.179 /dev/nbd1' 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.179 /dev/nbd1' 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.179 256+0 records in 00:06:16.179 256+0 records out 00:06:16.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142316 s, 73.7 MB/s 
00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.179 256+0 records in 00:06:16.179 256+0 records out 00:06:16.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242337 s, 43.3 MB/s 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.179 256+0 records in 00:06:16.179 256+0 records out 00:06:16.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247805 s, 42.3 MB/s 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.179 15:33:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.180 15:33:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:16.180 15:33:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.180 15:33:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.439 15:33:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.439 15:33:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.439 15:33:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.439 15:33:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.439 15:33:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.439 15:33:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.439 15:33:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.439 15:33:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.439 15:33:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.439 15:33:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.699 15:33:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.699 15:33:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:16.699 15:33:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.699 15:33:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.699 15:33:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.699 15:33:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.699 15:33:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.699 15:33:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.699 15:33:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.699 15:33:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.699 15:33:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.959 15:33:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.959 15:33:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.959 15:33:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.959 15:33:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.959 15:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.959 15:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.959 15:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.959 15:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.959 15:33:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.959 15:33:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.959 15:33:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.959 15:33:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.959 15:33:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.218 15:33:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:18.620 [2024-11-25 15:33:16.910357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.620 [2024-11-25 15:33:17.013559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.620 [2024-11-25 15:33:17.013563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.620 [2024-11-25 15:33:17.200373] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.620 [2024-11-25 15:33:17.200444] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.528 15:33:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58264 /var/tmp/spdk-nbd.sock 00:06:20.528 15:33:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58264 ']' 00:06:20.528 15:33:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.528 15:33:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.528 15:33:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:20.528 15:33:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.528 15:33:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.528 15:33:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.528 15:33:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:20.528 15:33:19 event.app_repeat -- event/event.sh@39 -- # killprocess 58264 00:06:20.528 15:33:19 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58264 ']' 00:06:20.528 15:33:19 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58264 00:06:20.528 15:33:19 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:20.528 15:33:19 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.528 15:33:19 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58264 00:06:20.528 15:33:19 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.528 15:33:19 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.529 killing process with pid 58264 00:06:20.529 15:33:19 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58264' 00:06:20.529 15:33:19 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58264 00:06:20.529 15:33:19 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58264 00:06:21.467 spdk_app_start is called in Round 0. 00:06:21.467 Shutdown signal received, stop current app iteration 00:06:21.467 Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 reinitialization... 00:06:21.467 spdk_app_start is called in Round 1. 00:06:21.467 Shutdown signal received, stop current app iteration 00:06:21.467 Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 reinitialization... 00:06:21.467 spdk_app_start is called in Round 2. 
00:06:21.467 Shutdown signal received, stop current app iteration 00:06:21.467 Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 reinitialization... 00:06:21.467 spdk_app_start is called in Round 3. 00:06:21.467 Shutdown signal received, stop current app iteration 00:06:21.467 15:33:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:21.467 15:33:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:21.467 00:06:21.467 real 0m18.893s 00:06:21.467 user 0m40.379s 00:06:21.467 sys 0m2.690s 00:06:21.467 15:33:20 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.467 15:33:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.467 ************************************ 00:06:21.467 END TEST app_repeat 00:06:21.467 ************************************ 00:06:21.467 15:33:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:21.467 15:33:20 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:21.467 15:33:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.467 15:33:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.467 15:33:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.467 ************************************ 00:06:21.467 START TEST cpu_locks 00:06:21.467 ************************************ 00:06:21.467 15:33:20 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:21.727 * Looking for test storage... 
00:06:21.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:21.727 15:33:20 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:21.727 15:33:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:21.727 15:33:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:21.727 15:33:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:21.727 15:33:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.728 15:33:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.728 15:33:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.728 15:33:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:21.728 15:33:20 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.728 15:33:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:21.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.728 --rc genhtml_branch_coverage=1 00:06:21.728 --rc genhtml_function_coverage=1 00:06:21.728 --rc genhtml_legend=1 00:06:21.728 --rc geninfo_all_blocks=1 00:06:21.728 --rc geninfo_unexecuted_blocks=1 00:06:21.728 00:06:21.728 ' 00:06:21.728 15:33:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:21.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.728 --rc genhtml_branch_coverage=1 00:06:21.728 --rc genhtml_function_coverage=1 00:06:21.728 --rc genhtml_legend=1 00:06:21.728 --rc geninfo_all_blocks=1 00:06:21.728 --rc geninfo_unexecuted_blocks=1 
00:06:21.728 00:06:21.728 ' 00:06:21.728 15:33:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:21.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.728 --rc genhtml_branch_coverage=1 00:06:21.728 --rc genhtml_function_coverage=1 00:06:21.728 --rc genhtml_legend=1 00:06:21.728 --rc geninfo_all_blocks=1 00:06:21.728 --rc geninfo_unexecuted_blocks=1 00:06:21.728 00:06:21.728 ' 00:06:21.728 15:33:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:21.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.728 --rc genhtml_branch_coverage=1 00:06:21.728 --rc genhtml_function_coverage=1 00:06:21.728 --rc genhtml_legend=1 00:06:21.728 --rc geninfo_all_blocks=1 00:06:21.728 --rc geninfo_unexecuted_blocks=1 00:06:21.728 00:06:21.728 ' 00:06:21.728 15:33:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:21.728 15:33:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:21.728 15:33:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:21.728 15:33:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:21.728 15:33:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.728 15:33:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.728 15:33:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.728 ************************************ 00:06:21.728 START TEST default_locks 00:06:21.728 ************************************ 00:06:21.728 15:33:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:21.728 15:33:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58711 00:06:21.728 15:33:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.728 
15:33:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58711 00:06:21.728 15:33:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58711 ']' 00:06:21.728 15:33:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.728 15:33:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.728 15:33:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.728 15:33:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.728 15:33:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.988 [2024-11-25 15:33:20.448291] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:06:21.988 [2024-11-25 15:33:20.448422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58711 ] 00:06:21.988 [2024-11-25 15:33:20.621445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.248 [2024-11-25 15:33:20.731844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.187 15:33:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.187 15:33:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:23.187 15:33:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58711 00:06:23.187 15:33:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58711 00:06:23.187 15:33:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.446 15:33:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58711 00:06:23.446 15:33:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58711 ']' 00:06:23.446 15:33:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58711 00:06:23.446 15:33:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:23.446 15:33:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.446 15:33:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58711 00:06:23.446 killing process with pid 58711 00:06:23.446 15:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.447 15:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.447 15:33:22 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58711' 00:06:23.447 15:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58711 00:06:23.447 15:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58711 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58711 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58711 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58711 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58711 ']' 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:25.986 ERROR: process (pid: 58711) is no longer running 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.986 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58711) - No such process 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.986 00:06:25.986 real 0m3.903s 00:06:25.986 user 0m3.851s 00:06:25.986 sys 0m0.647s 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.986 15:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.986 ************************************ 00:06:25.986 END TEST default_locks 00:06:25.986 ************************************ 00:06:25.986 15:33:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:25.986 15:33:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:25.986 15:33:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.986 15:33:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.986 ************************************ 00:06:25.986 START TEST default_locks_via_rpc 00:06:25.986 ************************************ 00:06:25.986 15:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:25.986 15:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58781 00:06:25.986 15:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.986 15:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58781 00:06:25.986 15:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58781 ']' 00:06:25.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.986 15:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.986 15:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.986 15:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.986 15:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.986 15:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.986 [2024-11-25 15:33:24.423541] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:06:25.986 [2024-11-25 15:33:24.423748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58781 ] 00:06:25.986 [2024-11-25 15:33:24.593450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.246 [2024-11-25 15:33:24.700500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.185 15:33:25 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58781 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.185 15:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58781 00:06:27.445 15:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58781 00:06:27.446 15:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58781 ']' 00:06:27.446 15:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58781 00:06:27.446 15:33:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:27.446 15:33:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.446 15:33:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58781 00:06:27.446 killing process with pid 58781 00:06:27.446 15:33:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.446 15:33:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.446 15:33:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58781' 00:06:27.446 15:33:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58781 00:06:27.446 15:33:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58781 00:06:29.988 ************************************ 00:06:29.988 END TEST default_locks_via_rpc 00:06:29.988 ************************************ 00:06:29.988 00:06:29.988 real 0m3.939s 00:06:29.988 user 0m3.874s 00:06:29.988 sys 0m0.670s 00:06:29.988 
15:33:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.988 15:33:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.988 15:33:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:29.988 15:33:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.988 15:33:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.988 15:33:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.988 ************************************ 00:06:29.988 START TEST non_locking_app_on_locked_coremask 00:06:29.988 ************************************ 00:06:29.988 15:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:29.988 15:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58855 00:06:29.988 15:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.988 15:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58855 /var/tmp/spdk.sock 00:06:29.988 15:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58855 ']' 00:06:29.988 15:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.988 15:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.988 15:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:29.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.988 15:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.988 15:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.988 [2024-11-25 15:33:28.427200] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:06:29.988 [2024-11-25 15:33:28.427410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58855 ] 00:06:29.988 [2024-11-25 15:33:28.600799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.248 [2024-11-25 15:33:28.702568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.186 15:33:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.186 15:33:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:31.186 15:33:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58871 00:06:31.186 15:33:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:31.186 15:33:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58871 /var/tmp/spdk2.sock 00:06:31.186 15:33:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58871 ']' 00:06:31.186 15:33:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.186 15:33:29 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.186 15:33:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.186 15:33:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.186 15:33:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.186 [2024-11-25 15:33:29.627243] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:06:31.186 [2024-11-25 15:33:29.627450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58871 ] 00:06:31.186 [2024-11-25 15:33:29.795060] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.186 [2024-11-25 15:33:29.795124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.446 [2024-11-25 15:33:30.005857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.014 15:33:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.014 15:33:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:34.014 15:33:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58855 00:06:34.014 15:33:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58855 00:06:34.014 15:33:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.584 15:33:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58855 00:06:34.584 15:33:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58855 ']' 00:06:34.584 15:33:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58855 00:06:34.584 15:33:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.584 15:33:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.584 15:33:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58855 00:06:34.584 killing process with pid 58855 00:06:34.584 15:33:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.584 15:33:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.584 15:33:33 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58855' 00:06:34.584 15:33:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58855 00:06:34.584 15:33:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58855 00:06:39.894 15:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58871 00:06:39.894 15:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58871 ']' 00:06:39.894 15:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58871 00:06:39.894 15:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.894 15:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.894 15:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58871 00:06:39.894 15:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.894 15:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.894 15:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58871' 00:06:39.894 killing process with pid 58871 00:06:39.894 15:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58871 00:06:39.894 15:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58871 00:06:41.274 00:06:41.274 real 0m11.427s 00:06:41.274 user 0m11.694s 00:06:41.274 sys 0m1.364s 00:06:41.274 15:33:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:41.274 ************************************ 00:06:41.274 END TEST non_locking_app_on_locked_coremask 00:06:41.274 ************************************ 00:06:41.274 15:33:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.274 15:33:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:41.274 15:33:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.274 15:33:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.274 15:33:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.274 ************************************ 00:06:41.274 START TEST locking_app_on_unlocked_coremask 00:06:41.274 ************************************ 00:06:41.274 15:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:41.274 15:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59019 00:06:41.274 15:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:41.274 15:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59019 /var/tmp/spdk.sock 00:06:41.274 15:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59019 ']' 00:06:41.274 15:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.274 15:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.274 15:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.274 15:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.274 15:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.274 [2024-11-25 15:33:39.920174] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:06:41.274 [2024-11-25 15:33:39.920291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59019 ] 00:06:41.534 [2024-11-25 15:33:40.074652] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:41.534 [2024-11-25 15:33:40.074700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.534 [2024-11-25 15:33:40.185013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.473 15:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.473 15:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:42.473 15:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59035 00:06:42.473 15:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.473 15:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59035 /var/tmp/spdk2.sock 00:06:42.473 15:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59035 ']' 00:06:42.473 15:33:41 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.473 15:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.473 15:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.473 15:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.473 15:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.473 [2024-11-25 15:33:41.104993] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:06:42.473 [2024-11-25 15:33:41.105218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59035 ] 00:06:42.733 [2024-11-25 15:33:41.270747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.993 [2024-11-25 15:33:41.493515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.535 15:33:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.535 15:33:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:45.535 15:33:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59035 00:06:45.535 15:33:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59035 00:06:45.535 15:33:43 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.535 15:33:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59019 00:06:45.535 15:33:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59019 ']' 00:06:45.535 15:33:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59019 00:06:45.535 15:33:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.535 15:33:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.535 15:33:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59019 00:06:45.535 15:33:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.535 15:33:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.535 15:33:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59019' 00:06:45.535 killing process with pid 59019 00:06:45.535 15:33:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59019 00:06:45.535 15:33:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59019 00:06:50.861 15:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59035 00:06:50.861 15:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59035 ']' 00:06:50.861 15:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59035 00:06:50.861 15:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:50.861 
15:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.861 15:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59035 00:06:50.861 killing process with pid 59035 00:06:50.861 15:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.861 15:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.861 15:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59035' 00:06:50.861 15:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59035 00:06:50.861 15:33:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59035 00:06:52.244 ************************************ 00:06:52.244 END TEST locking_app_on_unlocked_coremask 00:06:52.244 ************************************ 00:06:52.244 00:06:52.244 real 0m11.059s 00:06:52.244 user 0m11.305s 00:06:52.244 sys 0m1.129s 00:06:52.244 15:33:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.244 15:33:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.504 15:33:50 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:52.504 15:33:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.504 15:33:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.504 15:33:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.504 ************************************ 00:06:52.504 START TEST locking_app_on_locked_coremask 00:06:52.504 
************************************ 00:06:52.504 15:33:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:52.504 15:33:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59181 00:06:52.504 15:33:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.504 15:33:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59181 /var/tmp/spdk.sock 00:06:52.504 15:33:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59181 ']' 00:06:52.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.504 15:33:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.504 15:33:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.504 15:33:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.504 15:33:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.504 15:33:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.504 [2024-11-25 15:33:51.051451] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:06:52.504 [2024-11-25 15:33:51.051562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59181 ] 00:06:52.764 [2024-11-25 15:33:51.222091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.764 [2024-11-25 15:33:51.332212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59202 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59202 /var/tmp/spdk2.sock 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59202 /var/tmp/spdk2.sock 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59202 /var/tmp/spdk2.sock 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59202 ']' 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.705 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.705 [2024-11-25 15:33:52.248132] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:06:53.705 [2024-11-25 15:33:52.248356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59202 ] 00:06:53.965 [2024-11-25 15:33:52.417025] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59181 has claimed it. 00:06:53.965 [2024-11-25 15:33:52.417087] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:54.226 ERROR: process (pid: 59202) is no longer running 00:06:54.226 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59202) - No such process 00:06:54.226 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.226 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:54.226 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:54.226 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:54.226 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:54.226 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:54.226 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59181 00:06:54.226 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59181 00:06:54.226 15:33:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.798 15:33:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59181 00:06:54.798 15:33:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59181 ']' 00:06:54.798 15:33:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59181 00:06:54.798 15:33:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:54.798 15:33:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.798 15:33:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59181 00:06:54.798 
killing process with pid 59181 00:06:54.798 15:33:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.798 15:33:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.798 15:33:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59181' 00:06:54.798 15:33:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59181 00:06:54.798 15:33:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59181 00:06:57.335 ************************************ 00:06:57.335 END TEST locking_app_on_locked_coremask 00:06:57.335 ************************************ 00:06:57.335 00:06:57.335 real 0m4.704s 00:06:57.335 user 0m4.856s 00:06:57.335 sys 0m0.814s 00:06:57.335 15:33:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.335 15:33:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.335 15:33:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:57.335 15:33:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.335 15:33:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.335 15:33:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.335 ************************************ 00:06:57.335 START TEST locking_overlapped_coremask 00:06:57.335 ************************************ 00:06:57.335 15:33:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:57.335 15:33:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59266 00:06:57.335 15:33:55 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:57.335 15:33:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59266 /var/tmp/spdk.sock 00:06:57.335 15:33:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59266 ']' 00:06:57.335 15:33:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.335 15:33:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.335 15:33:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.335 15:33:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.335 15:33:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.335 [2024-11-25 15:33:55.827352] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:06:57.335 [2024-11-25 15:33:55.827469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59266 ] 00:06:57.335 [2024-11-25 15:33:56.003303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.593 [2024-11-25 15:33:56.118096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.593 [2024-11-25 15:33:56.118204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.593 [2024-11-25 15:33:56.118269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59284 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59284 /var/tmp/spdk2.sock 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59284 /var/tmp/spdk2.sock 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59284 /var/tmp/spdk2.sock 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59284 ']' 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.528 15:33:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.528 [2024-11-25 15:33:57.063606] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:06:58.528 [2024-11-25 15:33:57.064248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59284 ] 00:06:58.787 [2024-11-25 15:33:57.239235] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59266 has claimed it. 00:06:58.787 [2024-11-25 15:33:57.239300] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:59.046 ERROR: process (pid: 59284) is no longer running 00:06:59.046 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59284) - No such process 00:06:59.046 15:33:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.046 15:33:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:59.046 15:33:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:59.046 15:33:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:59.046 15:33:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:59.046 15:33:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:59.046 15:33:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:59.046 15:33:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:59.046 15:33:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:59.046 15:33:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:59.046 15:33:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59266 00:06:59.046 15:33:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59266 ']' 00:06:59.046 15:33:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59266 00:06:59.046 15:33:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:59.046 15:33:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.046 15:33:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59266 00:06:59.305 15:33:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.305 15:33:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.305 15:33:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59266' 00:06:59.305 killing process with pid 59266 00:06:59.305 15:33:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59266 00:06:59.305 15:33:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59266 00:07:01.856 00:07:01.856 real 0m4.391s 00:07:01.856 user 0m11.967s 00:07:01.856 sys 0m0.573s 00:07:01.856 15:34:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.856 15:34:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.856 ************************************ 00:07:01.856 END TEST locking_overlapped_coremask 00:07:01.856 ************************************ 00:07:01.856 15:34:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:01.856 15:34:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.856 15:34:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.856 15:34:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.856 ************************************ 00:07:01.856 START TEST 
locking_overlapped_coremask_via_rpc 00:07:01.856 ************************************ 00:07:01.856 15:34:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:01.856 15:34:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59356 00:07:01.856 15:34:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:01.856 15:34:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59356 /var/tmp/spdk.sock 00:07:01.856 15:34:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59356 ']' 00:07:01.856 15:34:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.856 15:34:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.856 15:34:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.856 15:34:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.856 15:34:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.856 [2024-11-25 15:34:00.278741] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:07:01.856 [2024-11-25 15:34:00.278859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59356 ] 00:07:01.856 [2024-11-25 15:34:00.454335] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:01.856 [2024-11-25 15:34:00.454390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.131 [2024-11-25 15:34:00.573898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.131 [2024-11-25 15:34:00.573976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.131 [2024-11-25 15:34:00.574045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.069 15:34:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.069 15:34:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:03.069 15:34:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:03.069 15:34:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59374 00:07:03.069 15:34:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59374 /var/tmp/spdk2.sock 00:07:03.069 15:34:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59374 ']' 00:07:03.069 15:34:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.069 15:34:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.069 15:34:01 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.069 15:34:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.069 15:34:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.069 [2024-11-25 15:34:01.549427] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:07:03.069 [2024-11-25 15:34:01.549645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59374 ] 00:07:03.069 [2024-11-25 15:34:01.725481] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:03.069 [2024-11-25 15:34:01.725531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.329 [2024-11-25 15:34:01.964647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.329 [2024-11-25 15:34:01.968146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.329 [2024-11-25 15:34:01.968190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:05.867 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.867 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:05.867 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:05.867 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.867 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.867 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.867 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.867 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.868 15:34:04 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.868 [2024-11-25 15:34:04.184229] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59356 has claimed it. 00:07:05.868 request: 00:07:05.868 { 00:07:05.868 "method": "framework_enable_cpumask_locks", 00:07:05.868 "req_id": 1 00:07:05.868 } 00:07:05.868 Got JSON-RPC error response 00:07:05.868 response: 00:07:05.868 { 00:07:05.868 "code": -32603, 00:07:05.868 "message": "Failed to claim CPU core: 2" 00:07:05.868 } 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59356 /var/tmp/spdk.sock 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59356 ']' 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59374 /var/tmp/spdk2.sock 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59374 ']' 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.868 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.128 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.128 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:06.128 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:06.128 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.128 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.128 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.128 00:07:06.128 real 0m4.466s 00:07:06.128 user 0m1.347s 00:07:06.128 sys 0m0.189s 00:07:06.128 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.128 15:34:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.128 ************************************ 00:07:06.128 END TEST locking_overlapped_coremask_via_rpc 00:07:06.128 ************************************ 00:07:06.128 15:34:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:06.128 15:34:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59356 ]] 00:07:06.128 15:34:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59356 00:07:06.128 15:34:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59356 ']' 00:07:06.128 15:34:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59356 00:07:06.128 15:34:04 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:06.128 15:34:04 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.128 15:34:04 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59356 00:07:06.129 killing process with pid 59356 00:07:06.129 15:34:04 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.129 15:34:04 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.129 15:34:04 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59356' 00:07:06.129 15:34:04 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59356 00:07:06.129 15:34:04 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59356 00:07:08.666 15:34:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59374 ]] 00:07:08.666 15:34:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59374 00:07:08.666 15:34:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59374 ']' 00:07:08.666 15:34:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59374 00:07:08.666 15:34:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:08.666 15:34:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.666 15:34:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59374 00:07:08.666 15:34:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:08.666 15:34:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:08.666 15:34:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59374' 00:07:08.666 killing 
process with pid 59374 00:07:08.666 15:34:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59374 00:07:08.666 15:34:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59374 00:07:11.203 15:34:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:11.203 15:34:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:11.203 15:34:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59356 ]] 00:07:11.203 15:34:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59356 00:07:11.203 15:34:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59356 ']' 00:07:11.203 15:34:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59356 00:07:11.203 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59356) - No such process 00:07:11.203 Process with pid 59356 is not found 00:07:11.203 Process with pid 59374 is not found 00:07:11.203 15:34:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59356 is not found' 00:07:11.203 15:34:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59374 ]] 00:07:11.203 15:34:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59374 00:07:11.203 15:34:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59374 ']' 00:07:11.203 15:34:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59374 00:07:11.203 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59374) - No such process 00:07:11.203 15:34:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59374 is not found' 00:07:11.203 15:34:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:11.203 00:07:11.203 real 0m49.607s 00:07:11.203 user 1m25.995s 00:07:11.203 sys 0m6.572s 00:07:11.203 15:34:09 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.203 15:34:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.203 
************************************ 00:07:11.203 END TEST cpu_locks 00:07:11.203 ************************************ 00:07:11.203 00:07:11.203 real 1m19.438s 00:07:11.203 user 2m23.311s 00:07:11.203 sys 0m10.472s 00:07:11.203 15:34:09 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.203 15:34:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.203 ************************************ 00:07:11.203 END TEST event 00:07:11.203 ************************************ 00:07:11.203 15:34:09 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:11.203 15:34:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.203 15:34:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.203 15:34:09 -- common/autotest_common.sh@10 -- # set +x 00:07:11.203 ************************************ 00:07:11.203 START TEST thread 00:07:11.203 ************************************ 00:07:11.203 15:34:09 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:11.478 * Looking for test storage... 
00:07:11.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:11.478 15:34:09 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.478 15:34:09 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.478 15:34:09 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.478 15:34:10 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.478 15:34:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.478 15:34:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.478 15:34:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.478 15:34:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.478 15:34:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.478 15:34:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.478 15:34:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.478 15:34:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.478 15:34:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.478 15:34:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.478 15:34:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.478 15:34:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:11.478 15:34:10 thread -- scripts/common.sh@345 -- # : 1 00:07:11.478 15:34:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.478 15:34:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.478 15:34:10 thread -- scripts/common.sh@365 -- # decimal 1 00:07:11.478 15:34:10 thread -- scripts/common.sh@353 -- # local d=1 00:07:11.478 15:34:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.478 15:34:10 thread -- scripts/common.sh@355 -- # echo 1 00:07:11.478 15:34:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.478 15:34:10 thread -- scripts/common.sh@366 -- # decimal 2 00:07:11.478 15:34:10 thread -- scripts/common.sh@353 -- # local d=2 00:07:11.478 15:34:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.478 15:34:10 thread -- scripts/common.sh@355 -- # echo 2 00:07:11.478 15:34:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.478 15:34:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.478 15:34:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.478 15:34:10 thread -- scripts/common.sh@368 -- # return 0 00:07:11.478 15:34:10 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.478 15:34:10 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.478 --rc genhtml_branch_coverage=1 00:07:11.478 --rc genhtml_function_coverage=1 00:07:11.478 --rc genhtml_legend=1 00:07:11.478 --rc geninfo_all_blocks=1 00:07:11.478 --rc geninfo_unexecuted_blocks=1 00:07:11.478 00:07:11.478 ' 00:07:11.478 15:34:10 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.478 --rc genhtml_branch_coverage=1 00:07:11.478 --rc genhtml_function_coverage=1 00:07:11.478 --rc genhtml_legend=1 00:07:11.478 --rc geninfo_all_blocks=1 00:07:11.478 --rc geninfo_unexecuted_blocks=1 00:07:11.478 00:07:11.478 ' 00:07:11.478 15:34:10 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.478 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.478 --rc genhtml_branch_coverage=1 00:07:11.479 --rc genhtml_function_coverage=1 00:07:11.479 --rc genhtml_legend=1 00:07:11.479 --rc geninfo_all_blocks=1 00:07:11.479 --rc geninfo_unexecuted_blocks=1 00:07:11.479 00:07:11.479 ' 00:07:11.479 15:34:10 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.479 --rc genhtml_branch_coverage=1 00:07:11.479 --rc genhtml_function_coverage=1 00:07:11.479 --rc genhtml_legend=1 00:07:11.479 --rc geninfo_all_blocks=1 00:07:11.479 --rc geninfo_unexecuted_blocks=1 00:07:11.479 00:07:11.479 ' 00:07:11.479 15:34:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:11.479 15:34:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:11.479 15:34:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.479 15:34:10 thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.479 ************************************ 00:07:11.479 START TEST thread_poller_perf 00:07:11.479 ************************************ 00:07:11.479 15:34:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:11.479 [2024-11-25 15:34:10.126157] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:07:11.479 [2024-11-25 15:34:10.126332] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59569 ] 00:07:11.759 [2024-11-25 15:34:10.290435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.759 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:11.759 [2024-11-25 15:34:10.400101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.141 [2024-11-25T15:34:11.822Z] ====================================== 00:07:13.141 [2024-11-25T15:34:11.822Z] busy:2296646290 (cyc) 00:07:13.141 [2024-11-25T15:34:11.822Z] total_run_count: 416000 00:07:13.141 [2024-11-25T15:34:11.822Z] tsc_hz: 2290000000 (cyc) 00:07:13.141 [2024-11-25T15:34:11.822Z] ====================================== 00:07:13.141 [2024-11-25T15:34:11.822Z] poller_cost: 5520 (cyc), 2410 (nsec) 00:07:13.141 00:07:13.141 real 0m1.547s 00:07:13.141 user 0m1.353s 00:07:13.141 sys 0m0.088s 00:07:13.141 15:34:11 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.141 15:34:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:13.141 ************************************ 00:07:13.141 END TEST thread_poller_perf 00:07:13.141 ************************************ 00:07:13.141 15:34:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:13.141 15:34:11 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:13.141 15:34:11 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.141 15:34:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.141 ************************************ 00:07:13.141 START TEST thread_poller_perf 00:07:13.141 
************************************ 00:07:13.141 15:34:11 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:13.141 [2024-11-25 15:34:11.743193] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:07:13.141 [2024-11-25 15:34:11.743280] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59611 ] 00:07:13.402 [2024-11-25 15:34:11.914729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.402 [2024-11-25 15:34:12.023647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.402 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:14.783 [2024-11-25T15:34:13.464Z] ====================================== 00:07:14.783 [2024-11-25T15:34:13.464Z] busy:2293487320 (cyc) 00:07:14.783 [2024-11-25T15:34:13.464Z] total_run_count: 5523000 00:07:14.783 [2024-11-25T15:34:13.464Z] tsc_hz: 2290000000 (cyc) 00:07:14.783 [2024-11-25T15:34:13.464Z] ====================================== 00:07:14.783 [2024-11-25T15:34:13.464Z] poller_cost: 415 (cyc), 181 (nsec) 00:07:14.783 00:07:14.783 real 0m1.543s 00:07:14.783 user 0m1.335s 00:07:14.783 sys 0m0.101s 00:07:14.783 15:34:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.783 ************************************ 00:07:14.783 END TEST thread_poller_perf 00:07:14.783 ************************************ 00:07:14.783 15:34:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:14.783 15:34:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:14.784 00:07:14.784 real 0m3.448s 00:07:14.784 user 0m2.864s 00:07:14.784 sys 0m0.384s 00:07:14.784 ************************************ 
00:07:14.784 END TEST thread 00:07:14.784 ************************************ 00:07:14.784 15:34:13 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.784 15:34:13 thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.784 15:34:13 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:14.784 15:34:13 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:14.784 15:34:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.784 15:34:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.784 15:34:13 -- common/autotest_common.sh@10 -- # set +x 00:07:14.784 ************************************ 00:07:14.784 START TEST app_cmdline 00:07:14.784 ************************************ 00:07:14.784 15:34:13 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:14.784 * Looking for test storage... 00:07:15.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:15.044 15:34:13 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:15.044 15:34:13 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:15.044 15:34:13 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:15.044 15:34:13 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.044 15:34:13 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:15.044 15:34:13 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.044 15:34:13 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:15.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.044 --rc genhtml_branch_coverage=1 00:07:15.044 --rc genhtml_function_coverage=1 00:07:15.044 --rc 
genhtml_legend=1 00:07:15.044 --rc geninfo_all_blocks=1 00:07:15.044 --rc geninfo_unexecuted_blocks=1 00:07:15.044 00:07:15.044 ' 00:07:15.044 15:34:13 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:15.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.044 --rc genhtml_branch_coverage=1 00:07:15.044 --rc genhtml_function_coverage=1 00:07:15.044 --rc genhtml_legend=1 00:07:15.044 --rc geninfo_all_blocks=1 00:07:15.044 --rc geninfo_unexecuted_blocks=1 00:07:15.044 00:07:15.044 ' 00:07:15.044 15:34:13 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:15.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.044 --rc genhtml_branch_coverage=1 00:07:15.044 --rc genhtml_function_coverage=1 00:07:15.044 --rc genhtml_legend=1 00:07:15.044 --rc geninfo_all_blocks=1 00:07:15.044 --rc geninfo_unexecuted_blocks=1 00:07:15.044 00:07:15.044 ' 00:07:15.044 15:34:13 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:15.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.044 --rc genhtml_branch_coverage=1 00:07:15.044 --rc genhtml_function_coverage=1 00:07:15.044 --rc genhtml_legend=1 00:07:15.044 --rc geninfo_all_blocks=1 00:07:15.044 --rc geninfo_unexecuted_blocks=1 00:07:15.044 00:07:15.044 ' 00:07:15.044 15:34:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:15.044 15:34:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59700 00:07:15.044 15:34:13 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:15.044 15:34:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59700 00:07:15.044 15:34:13 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59700 ']' 00:07:15.044 15:34:13 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.044 15:34:13 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:07:15.044 15:34:13 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.044 15:34:13 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.044 15:34:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:15.045 [2024-11-25 15:34:13.671301] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:07:15.045 [2024-11-25 15:34:13.671496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59700 ] 00:07:15.305 [2024-11-25 15:34:13.844613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.305 [2024-11-25 15:34:13.955085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.244 15:34:14 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.244 15:34:14 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:16.244 15:34:14 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:16.502 { 00:07:16.502 "version": "SPDK v25.01-pre git sha1 ff2e6bfe4", 00:07:16.502 "fields": { 00:07:16.502 "major": 25, 00:07:16.502 "minor": 1, 00:07:16.502 "patch": 0, 00:07:16.502 "suffix": "-pre", 00:07:16.502 "commit": "ff2e6bfe4" 00:07:16.502 } 00:07:16.502 } 00:07:16.502 15:34:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:16.502 15:34:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:16.502 15:34:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:16.502 15:34:14 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:16.502 15:34:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:16.502 15:34:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:16.502 15:34:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:16.502 15:34:14 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.502 15:34:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:16.503 15:34:14 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.503 15:34:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:16.503 15:34:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:16.503 15:34:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.503 15:34:15 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:16.503 15:34:15 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.503 15:34:15 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.503 15:34:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.503 15:34:15 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.503 15:34:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.503 15:34:15 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.503 15:34:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.503 15:34:15 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.503 15:34:15 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:16.503 15:34:15 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.762 request: 00:07:16.762 { 00:07:16.762 "method": "env_dpdk_get_mem_stats", 00:07:16.762 "req_id": 1 00:07:16.762 } 00:07:16.762 Got JSON-RPC error response 00:07:16.762 response: 00:07:16.762 { 00:07:16.762 "code": -32601, 00:07:16.762 "message": "Method not found" 00:07:16.762 } 00:07:16.762 15:34:15 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:16.762 15:34:15 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.762 15:34:15 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.762 15:34:15 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.762 15:34:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59700 00:07:16.762 15:34:15 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59700 ']' 00:07:16.762 15:34:15 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59700 00:07:16.762 15:34:15 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:16.762 15:34:15 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.762 15:34:15 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59700 00:07:16.762 killing process with pid 59700 00:07:16.762 15:34:15 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.762 15:34:15 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.762 15:34:15 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59700' 00:07:16.762 15:34:15 app_cmdline -- common/autotest_common.sh@973 -- # kill 59700 00:07:16.762 15:34:15 app_cmdline -- common/autotest_common.sh@978 -- # wait 59700 00:07:19.299 ************************************ 00:07:19.299 END TEST app_cmdline 00:07:19.299 ************************************ 
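The app_cmdline test above starts spdk_tgt with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so calling any other method such as env_dpdk_get_mem_stats yields JSON-RPC error -32601 ("Method not found"), as the logged response shows. A hedged sketch of how a client could classify such a response (the helper name and dict shape are illustrative, not SPDK API; -32601 is the standard JSON-RPC 2.0 code for an unknown method):

```python
# Classify a JSON-RPC response like the one logged above.
def classify_rpc_response(resp: dict) -> str:
    error = resp.get("error")
    if error is None:
        return "ok"
    if error.get("code") == -32601:
        return "method not found"
    return "error {}".format(error.get("code"))

# Error body as shown in the log for env_dpdk_get_mem_stats:
resp = {"error": {"code": -32601, "message": "Method not found"}}
print(classify_rpc_response(resp))  # method not found
```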
00:07:19.299 00:07:19.299 real 0m4.154s 00:07:19.299 user 0m4.365s 00:07:19.299 sys 0m0.586s 00:07:19.299 15:34:17 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.299 15:34:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:19.299 15:34:17 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:19.299 15:34:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.299 15:34:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.299 15:34:17 -- common/autotest_common.sh@10 -- # set +x 00:07:19.299 ************************************ 00:07:19.299 START TEST version 00:07:19.300 ************************************ 00:07:19.300 15:34:17 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:19.300 * Looking for test storage... 00:07:19.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:19.300 15:34:17 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:19.300 15:34:17 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:19.300 15:34:17 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:19.300 15:34:17 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:19.300 15:34:17 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.300 15:34:17 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.300 15:34:17 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.300 15:34:17 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.300 15:34:17 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.300 15:34:17 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.300 15:34:17 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.300 15:34:17 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.300 15:34:17 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.300 15:34:17 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:19.300 15:34:17 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.300 15:34:17 version -- scripts/common.sh@344 -- # case "$op" in 00:07:19.300 15:34:17 version -- scripts/common.sh@345 -- # : 1 00:07:19.300 15:34:17 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.300 15:34:17 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:19.300 15:34:17 version -- scripts/common.sh@365 -- # decimal 1 00:07:19.300 15:34:17 version -- scripts/common.sh@353 -- # local d=1 00:07:19.300 15:34:17 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.300 15:34:17 version -- scripts/common.sh@355 -- # echo 1 00:07:19.300 15:34:17 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.300 15:34:17 version -- scripts/common.sh@366 -- # decimal 2 00:07:19.300 15:34:17 version -- scripts/common.sh@353 -- # local d=2 00:07:19.300 15:34:17 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.300 15:34:17 version -- scripts/common.sh@355 -- # echo 2 00:07:19.300 15:34:17 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.300 15:34:17 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.300 15:34:17 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.300 15:34:17 version -- scripts/common.sh@368 -- # return 0 00:07:19.300 15:34:17 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.300 15:34:17 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:19.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.300 --rc genhtml_branch_coverage=1 00:07:19.300 --rc genhtml_function_coverage=1 00:07:19.300 --rc genhtml_legend=1 00:07:19.300 --rc geninfo_all_blocks=1 00:07:19.300 --rc geninfo_unexecuted_blocks=1 00:07:19.300 00:07:19.300 ' 00:07:19.300 15:34:17 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:07:19.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.300 --rc genhtml_branch_coverage=1 00:07:19.300 --rc genhtml_function_coverage=1 00:07:19.300 --rc genhtml_legend=1 00:07:19.300 --rc geninfo_all_blocks=1 00:07:19.300 --rc geninfo_unexecuted_blocks=1 00:07:19.300 00:07:19.300 ' 00:07:19.300 15:34:17 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:19.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.300 --rc genhtml_branch_coverage=1 00:07:19.300 --rc genhtml_function_coverage=1 00:07:19.300 --rc genhtml_legend=1 00:07:19.300 --rc geninfo_all_blocks=1 00:07:19.300 --rc geninfo_unexecuted_blocks=1 00:07:19.300 00:07:19.300 ' 00:07:19.300 15:34:17 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:19.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.300 --rc genhtml_branch_coverage=1 00:07:19.300 --rc genhtml_function_coverage=1 00:07:19.300 --rc genhtml_legend=1 00:07:19.300 --rc geninfo_all_blocks=1 00:07:19.300 --rc geninfo_unexecuted_blocks=1 00:07:19.300 00:07:19.300 ' 00:07:19.300 15:34:17 version -- app/version.sh@17 -- # get_header_version major 00:07:19.300 15:34:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:19.300 15:34:17 version -- app/version.sh@14 -- # cut -f2 00:07:19.300 15:34:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:19.300 15:34:17 version -- app/version.sh@17 -- # major=25 00:07:19.300 15:34:17 version -- app/version.sh@18 -- # get_header_version minor 00:07:19.300 15:34:17 version -- app/version.sh@14 -- # cut -f2 00:07:19.300 15:34:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:19.300 15:34:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:19.300 15:34:17 version -- app/version.sh@18 -- # minor=1 00:07:19.300 15:34:17 
version -- app/version.sh@19 -- # get_header_version patch 00:07:19.300 15:34:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:19.300 15:34:17 version -- app/version.sh@14 -- # cut -f2 00:07:19.300 15:34:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:19.300 15:34:17 version -- app/version.sh@19 -- # patch=0 00:07:19.300 15:34:17 version -- app/version.sh@20 -- # get_header_version suffix 00:07:19.300 15:34:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:19.300 15:34:17 version -- app/version.sh@14 -- # cut -f2 00:07:19.300 15:34:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:19.300 15:34:17 version -- app/version.sh@20 -- # suffix=-pre 00:07:19.300 15:34:17 version -- app/version.sh@22 -- # version=25.1 00:07:19.300 15:34:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:19.300 15:34:17 version -- app/version.sh@28 -- # version=25.1rc0 00:07:19.300 15:34:17 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:19.300 15:34:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:19.300 15:34:17 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:19.300 15:34:17 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:19.300 ************************************ 00:07:19.300 END TEST version 00:07:19.300 ************************************ 00:07:19.300 00:07:19.300 real 0m0.311s 00:07:19.300 user 0m0.184s 00:07:19.300 sys 0m0.184s 00:07:19.300 15:34:17 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.300 15:34:17 version -- common/autotest_common.sh@10 -- # set +x 00:07:19.300 
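version.sh above greps the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX defines out of include/spdk/version.h, builds `25.1`, appends the patch only when non-zero, and ends up comparing `25.1rc0` against `spdk.__version__`. A small sketch of that assembly, following the logged values (the rendering of the `-pre` suffix as `rc0` is inferred from the log, not from version.sh itself):

```python
def spdk_version(major: int, minor: int, patch: int, suffix: str) -> str:
    # major.minor, with .patch only when patch is non-zero (mirrors version.sh)
    version = "{}.{}".format(major, minor)
    if patch != 0:
        version += ".{}".format(patch)
    # the log shows the -pre suffix rendered as rc0 in the Python package version
    if suffix == "-pre":
        version += "rc0"
    return version

print(spdk_version(25, 1, 0, "-pre"))  # 25.1rc0, matching py_version in the log
```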
15:34:17 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:19.300 15:34:17 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:19.300 15:34:17 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:19.300 15:34:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.300 15:34:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.300 15:34:17 -- common/autotest_common.sh@10 -- # set +x 00:07:19.300 ************************************ 00:07:19.300 START TEST bdev_raid 00:07:19.300 ************************************ 00:07:19.300 15:34:17 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:19.561 * Looking for test storage... 00:07:19.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:19.561 15:34:18 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:19.561 15:34:18 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:07:19.561 15:34:18 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:19.561 15:34:18 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.561 15:34:18 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:19.561 15:34:18 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.561 15:34:18 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:19.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.561 --rc genhtml_branch_coverage=1 00:07:19.561 --rc genhtml_function_coverage=1 00:07:19.561 --rc genhtml_legend=1 00:07:19.561 --rc geninfo_all_blocks=1 00:07:19.561 --rc geninfo_unexecuted_blocks=1 00:07:19.561 00:07:19.561 ' 00:07:19.561 15:34:18 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:19.561 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:19.561 --rc genhtml_branch_coverage=1 00:07:19.561 --rc genhtml_function_coverage=1 00:07:19.561 --rc genhtml_legend=1 00:07:19.561 --rc geninfo_all_blocks=1 00:07:19.561 --rc geninfo_unexecuted_blocks=1 00:07:19.561 00:07:19.561 ' 00:07:19.561 15:34:18 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:19.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.561 --rc genhtml_branch_coverage=1 00:07:19.561 --rc genhtml_function_coverage=1 00:07:19.561 --rc genhtml_legend=1 00:07:19.561 --rc geninfo_all_blocks=1 00:07:19.561 --rc geninfo_unexecuted_blocks=1 00:07:19.561 00:07:19.561 ' 00:07:19.561 15:34:18 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:19.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.561 --rc genhtml_branch_coverage=1 00:07:19.561 --rc genhtml_function_coverage=1 00:07:19.561 --rc genhtml_legend=1 00:07:19.561 --rc geninfo_all_blocks=1 00:07:19.561 --rc geninfo_unexecuted_blocks=1 00:07:19.561 00:07:19.561 ' 00:07:19.561 15:34:18 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:19.561 15:34:18 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:19.561 15:34:18 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:19.561 15:34:18 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:19.561 15:34:18 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:19.561 15:34:18 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:19.561 15:34:18 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:19.561 15:34:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.561 15:34:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.561 15:34:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.561 ************************************ 
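Each test above sources scripts/common.sh and runs `lt 1.15 2` against the installed lcov version: both strings are split on `.`, `-`, and `:` into arrays (`IFS=.-:` with `read -ra`), then compared element by element over the longer length, with missing elements counting as zero. A Python sketch of that comparison (names are illustrative; the shell helpers are `cmp_versions`/`decimal` in common.sh):

```python
import re

def version_lt(ver1: str, ver2: str) -> bool:
    # Split on .-: like IFS=.-: in common.sh; non-numeric parts count as 0.
    def parts(v):
        return [int(p) if re.fullmatch(r"[0-9]+", p) else 0
                for p in re.split(r"[.\-:]", v)]
    a, b = parts(ver1), parts(ver2)
    length = max(len(a), len(b))
    a += [0] * (length - len(a))
    b += [0] * (length - len(b))
    for x, y in zip(a, b):
        if x > y:
            return False
        if x < y:
            return True
    return False  # equal versions are not "less than"

print(version_lt("1.15", "2"))  # True: 1 < 2 on the first component
```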
00:07:19.561 START TEST raid1_resize_data_offset_test 00:07:19.561 ************************************ 00:07:19.561 15:34:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:19.561 15:34:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59882 00:07:19.561 15:34:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59882' 00:07:19.561 Process raid pid: 59882 00:07:19.561 15:34:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:19.561 15:34:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59882 00:07:19.562 15:34:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59882 ']' 00:07:19.562 15:34:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.562 15:34:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.562 15:34:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.562 15:34:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.562 15:34:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.822 [2024-11-25 15:34:18.277159] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:07:19.822 [2024-11-25 15:34:18.277291] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.822 [2024-11-25 15:34:18.450760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.081 [2024-11-25 15:34:18.558399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.081 [2024-11-25 15:34:18.754143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.081 [2024-11-25 15:34:18.754231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.651 malloc0 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.651 malloc1 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.651 15:34:19 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.651 null0 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.651 [2024-11-25 15:34:19.265592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:20.651 [2024-11-25 15:34:19.267337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:20.651 [2024-11-25 15:34:19.267382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:20.651 [2024-11-25 15:34:19.267518] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:20.651 [2024-11-25 15:34:19.267531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:20.651 [2024-11-25 15:34:19.267779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:20.651 [2024-11-25 15:34:19.267932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:20.651 [2024-11-25 15:34:19.267945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:20.651 [2024-11-25 15:34:19.268125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.651 15:34:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.652 15:34:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:20.652 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.652 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.652 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.652 15:34:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:20.652 15:34:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:20.652 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.652 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.936 [2024-11-25 15:34:19.329480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:20.936 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.936 15:34:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:20.936 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.936 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.203 malloc2 00:07:21.203 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.203 15:34:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:21.203 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.203 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.203 [2024-11-25 15:34:19.855958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:21.203 [2024-11-25 15:34:19.872714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:21.203 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.203 [2024-11-25 15:34:19.874478] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:21.203 15:34:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.203 15:34:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:21.203 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.203 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.462 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.462 15:34:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:21.462 15:34:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59882 00:07:21.462 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59882 ']' 00:07:21.462 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59882 00:07:21.462 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:07:21.462 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:07:21.462 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59882 00:07:21.462 killing process with pid 59882 00:07:21.462 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.462 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.462 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59882' 00:07:21.462 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59882 00:07:21.462 [2024-11-25 15:34:19.967728] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.462 15:34:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59882 00:07:21.462 [2024-11-25 15:34:19.968003] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:21.462 [2024-11-25 15:34:19.968072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.463 [2024-11-25 15:34:19.968089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:21.463 [2024-11-25 15:34:20.002064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.463 [2024-11-25 15:34:20.002372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.463 [2024-11-25 15:34:20.002391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:23.374 [2024-11-25 15:34:21.717278] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.314 ************************************ 00:07:24.314 END TEST raid1_resize_data_offset_test 00:07:24.314 ************************************ 00:07:24.314 15:34:22 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:07:24.314 00:07:24.314 real 0m4.584s 00:07:24.314 user 0m4.493s 00:07:24.314 sys 0m0.523s 00:07:24.314 15:34:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.314 15:34:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.314 15:34:22 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:24.314 15:34:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:24.314 15:34:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.314 15:34:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:24.314 ************************************ 00:07:24.314 START TEST raid0_resize_superblock_test 00:07:24.314 ************************************ 00:07:24.314 15:34:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:24.314 15:34:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:24.314 15:34:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59966 00:07:24.314 Process raid pid: 59966 00:07:24.314 15:34:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:24.314 15:34:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59966' 00:07:24.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:24.314 15:34:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59966 00:07:24.314 15:34:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59966 ']' 00:07:24.314 15:34:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.314 15:34:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.314 15:34:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.314 15:34:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.314 15:34:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.314 [2024-11-25 15:34:22.930430] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:07:24.314 [2024-11-25 15:34:22.930541] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.574 [2024-11-25 15:34:23.105264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.574 [2024-11-25 15:34:23.213274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.833 [2024-11-25 15:34:23.409987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.833 [2024-11-25 15:34:23.410029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.093 15:34:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.093 15:34:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:25.093 15:34:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:25.093 15:34:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.093 15:34:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.663 malloc0 00:07:25.663 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.664 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:25.664 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.664 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.664 [2024-11-25 15:34:24.278807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:25.664 [2024-11-25 15:34:24.278872] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.664 [2024-11-25 15:34:24.278897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:25.664 [2024-11-25 15:34:24.278908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.664 [2024-11-25 15:34:24.280935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.664 [2024-11-25 15:34:24.280979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:25.664 pt0 00:07:25.664 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.664 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:25.664 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.664 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.924 6fcf287a-55fc-4579-b887-a5be0db21f34 00:07:25.924 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.924 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:25.924 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.924 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.924 bda95b12-5fc7-4455-b156-16b7cdf9a1e4 00:07:25.924 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.924 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:25.924 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.924 15:34:24 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.924 76fa25e7-db16-413a-b3e2-8a192c8c8d87 00:07:25.924 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.924 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:25.924 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:25.924 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.924 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.924 [2024-11-25 15:34:24.411892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev bda95b12-5fc7-4455-b156-16b7cdf9a1e4 is claimed 00:07:25.924 [2024-11-25 15:34:24.411973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 76fa25e7-db16-413a-b3e2-8a192c8c8d87 is claimed 00:07:25.924 [2024-11-25 15:34:24.412116] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:25.925 [2024-11-25 15:34:24.412132] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:25.925 [2024-11-25 15:34:24.412370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:25.925 [2024-11-25 15:34:24.412563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:25.925 [2024-11-25 15:34:24.412582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:25.925 [2024-11-25 15:34:24.412736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.925 [2024-11-25 
15:34:24.503899] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.925 [2024-11-25 15:34:24.547785] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:25.925 [2024-11-25 15:34:24.547853] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'bda95b12-5fc7-4455-b156-16b7cdf9a1e4' was resized: old size 131072, new size 204800 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.925 [2024-11-25 15:34:24.559697] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:25.925 [2024-11-25 15:34:24.559760] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '76fa25e7-db16-413a-b3e2-8a192c8c8d87' was resized: old size 131072, new size 204800 00:07:25.925 
[2024-11-25 15:34:24.559831] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.925 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.193 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.193 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:26.193 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:26.193 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:26.193 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:26.193 15:34:24 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:26.193 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.194 [2024-11-25 15:34:24.635667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.194 [2024-11-25 15:34:24.663429] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:26.194 [2024-11-25 15:34:24.663530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:26.194 [2024-11-25 15:34:24.663558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.194 [2024-11-25 15:34:24.663595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:26.194 [2024-11-25 15:34:24.663715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.194 [2024-11-25 15:34:24.663798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:07:26.194 [2024-11-25 15:34:24.663859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.194 [2024-11-25 15:34:24.675363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:26.194 [2024-11-25 15:34:24.675416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.194 [2024-11-25 15:34:24.675434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:26.194 [2024-11-25 15:34:24.675444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.194 [2024-11-25 15:34:24.677475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.194 [2024-11-25 15:34:24.677513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:26.194 [2024-11-25 15:34:24.679049] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev bda95b12-5fc7-4455-b156-16b7cdf9a1e4 00:07:26.194 [2024-11-25 15:34:24.679112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev bda95b12-5fc7-4455-b156-16b7cdf9a1e4 is claimed 00:07:26.194 [2024-11-25 15:34:24.679241] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 76fa25e7-db16-413a-b3e2-8a192c8c8d87 00:07:26.194 [2024-11-25 15:34:24.679261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 76fa25e7-db16-413a-b3e2-8a192c8c8d87 is claimed 00:07:26.194 [2024-11-25 
15:34:24.679378] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 76fa25e7-db16-413a-b3e2-8a192c8c8d87 (2) smaller than existing raid bdev Raid (3) 00:07:26.194 [2024-11-25 15:34:24.679399] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev bda95b12-5fc7-4455-b156-16b7cdf9a1e4: File exists 00:07:26.194 [2024-11-25 15:34:24.679439] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:26.194 [2024-11-25 15:34:24.679449] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:26.194 [2024-11-25 15:34:24.679673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:26.194 [2024-11-25 15:34:24.679807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:26.194 [2024-11-25 15:34:24.679814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:26.194 [2024-11-25 15:34:24.679987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.194 pt0 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.194 [2024-11-25 15:34:24.703606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59966 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59966 ']' 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59966 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59966 00:07:26.194 killing process with pid 59966 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59966' 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59966 00:07:26.194 [2024-11-25 15:34:24.773956] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:26.194 [2024-11-25 15:34:24.774022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.194 [2024-11-25 15:34:24.774061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:26.194 [2024-11-25 15:34:24.774069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:26.194 15:34:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59966 00:07:27.577 [2024-11-25 15:34:26.116729] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.517 ************************************ 00:07:28.517 END TEST raid0_resize_superblock_test 00:07:28.517 ************************************ 00:07:28.517 15:34:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:28.517 00:07:28.517 real 0m4.327s 00:07:28.517 user 0m4.477s 00:07:28.517 sys 0m0.545s 00:07:28.517 15:34:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.517 15:34:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.777 15:34:27 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:28.777 15:34:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:28.777 15:34:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.777 15:34:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.777 ************************************ 00:07:28.777 START TEST raid1_resize_superblock_test 00:07:28.777 
************************************ 00:07:28.777 15:34:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:28.777 15:34:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:28.777 15:34:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60064 00:07:28.777 Process raid pid: 60064 00:07:28.777 15:34:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:28.777 15:34:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60064' 00:07:28.777 15:34:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60064 00:07:28.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.777 15:34:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60064 ']' 00:07:28.777 15:34:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.777 15:34:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.777 15:34:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.777 15:34:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.777 15:34:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.777 [2024-11-25 15:34:27.323197] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:07:28.777 [2024-11-25 15:34:27.323310] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.037 [2024-11-25 15:34:27.484123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.037 [2024-11-25 15:34:27.592596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.297 [2024-11-25 15:34:27.785558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.297 [2024-11-25 15:34:27.785687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.557 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.557 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:29.557 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:29.557 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.557 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.128 malloc0 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.128 [2024-11-25 15:34:28.650557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:30.128 [2024-11-25 15:34:28.650624] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.128 [2024-11-25 15:34:28.650649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:30.128 [2024-11-25 15:34:28.650662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.128 [2024-11-25 15:34:28.652708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.128 [2024-11-25 15:34:28.652760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:30.128 pt0 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.128 be7a3c0a-ae37-4c51-bcaa-821002a3e59d 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.128 b2bf0044-55ec-44e1-8d87-cbfd748a6748 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.128 15:34:28 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.128 ce04eb9d-8864-4495-97f5-23958001bfd6 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.128 [2024-11-25 15:34:28.783396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b2bf0044-55ec-44e1-8d87-cbfd748a6748 is claimed 00:07:30.128 [2024-11-25 15:34:28.783543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ce04eb9d-8864-4495-97f5-23958001bfd6 is claimed 00:07:30.128 [2024-11-25 15:34:28.783680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:30.128 [2024-11-25 15:34:28.783699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:30.128 [2024-11-25 15:34:28.783941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:30.128 [2024-11-25 15:34:28.784168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:30.128 [2024-11-25 15:34:28.784180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:30.128 [2024-11-25 15:34:28.784311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:30.128 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:30.391 [2024-11-25 
15:34:28.891391] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.391 [2024-11-25 15:34:28.939267] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:30.391 [2024-11-25 15:34:28.939338] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b2bf0044-55ec-44e1-8d87-cbfd748a6748' was resized: old size 131072, new size 204800 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.391 [2024-11-25 15:34:28.951218] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:30.391 [2024-11-25 15:34:28.951284] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ce04eb9d-8864-4495-97f5-23958001bfd6' was resized: old size 131072, new size 204800 00:07:30.391 
[2024-11-25 15:34:28.951351] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.391 15:34:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.391 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:30.391 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:30.391 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:30.391 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.391 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.391 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.391 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:30.391 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:30.391 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:30.391 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:30.391 15:34:29 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:30.391 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.391 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.391 [2024-11-25 15:34:29.067119] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.661 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.661 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:30.661 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:30.661 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:30.661 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:30.661 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.661 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.661 [2024-11-25 15:34:29.110826] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:30.661 [2024-11-25 15:34:29.110895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:30.661 [2024-11-25 15:34:29.110920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:30.661 [2024-11-25 15:34:29.111065] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.661 [2024-11-25 15:34:29.111286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.661 [2024-11-25 15:34:29.111356] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.661 
[2024-11-25 15:34:29.111369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:30.661 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.661 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:30.661 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.661 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.661 [2024-11-25 15:34:29.122755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:30.661 [2024-11-25 15:34:29.122866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.661 [2024-11-25 15:34:29.122889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:30.661 [2024-11-25 15:34:29.122901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.661 [2024-11-25 15:34:29.124951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.661 [2024-11-25 15:34:29.124984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:30.661 [2024-11-25 15:34:29.126538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b2bf0044-55ec-44e1-8d87-cbfd748a6748 00:07:30.661 [2024-11-25 15:34:29.126604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b2bf0044-55ec-44e1-8d87-cbfd748a6748 is claimed 00:07:30.661 [2024-11-25 15:34:29.126720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ce04eb9d-8864-4495-97f5-23958001bfd6 00:07:30.661 [2024-11-25 15:34:29.126739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ce04eb9d-8864-4495-97f5-23958001bfd6 is claimed 00:07:30.661 [2024-11-25 15:34:29.126848] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev ce04eb9d-8864-4495-97f5-23958001bfd6 (2) smaller than existing raid bdev Raid (3) 00:07:30.661 [2024-11-25 15:34:29.126866] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev b2bf0044-55ec-44e1-8d87-cbfd748a6748: File exists 00:07:30.661 [2024-11-25 15:34:29.126907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:30.661 [2024-11-25 15:34:29.126918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:30.661 [2024-11-25 15:34:29.127164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:30.661 [2024-11-25 15:34:29.127325] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:30.661 [2024-11-25 15:34:29.127334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:30.661 [2024-11-25 15:34:29.127516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.661 pt0 00:07:30.661 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.661 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:30.661 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.661 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case 
$raid_level in 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.662 [2024-11-25 15:34:29.151112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60064 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60064 ']' 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60064 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60064 00:07:30.662 killing process with pid 60064 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60064' 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60064 00:07:30.662 [2024-11-25 15:34:29.230501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.662 [2024-11-25 15:34:29.230554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.662 [2024-11-25 15:34:29.230595] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.662 [2024-11-25 15:34:29.230603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:30.662 15:34:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60064 00:07:32.042 [2024-11-25 15:34:30.578322] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.981 15:34:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:32.981 00:07:32.981 real 0m4.382s 00:07:32.981 user 0m4.592s 00:07:32.981 sys 0m0.547s 00:07:32.981 ************************************ 00:07:32.981 END TEST raid1_resize_superblock_test 00:07:32.981 ************************************ 00:07:32.981 15:34:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.981 15:34:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.239 15:34:31 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:33.239 15:34:31 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:33.239 15:34:31 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:33.239 15:34:31 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:33.239 15:34:31 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:33.239 15:34:31 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:33.239 
15:34:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:33.239 15:34:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.239 15:34:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:33.239 ************************************ 00:07:33.239 START TEST raid_function_test_raid0 00:07:33.239 ************************************ 00:07:33.239 15:34:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:33.239 15:34:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:33.239 15:34:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:33.239 15:34:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:33.239 Process raid pid: 60161 00:07:33.239 15:34:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60161 00:07:33.239 15:34:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:33.239 15:34:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60161' 00:07:33.239 15:34:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60161 00:07:33.239 15:34:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60161 ']' 00:07:33.239 15:34:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.239 15:34:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.239 15:34:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:33.239 15:34:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.240 15:34:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:33.240 [2024-11-25 15:34:31.800671] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:07:33.240 [2024-11-25 15:34:31.800875] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.499 [2024-11-25 15:34:31.972801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.499 [2024-11-25 15:34:32.082214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.759 [2024-11-25 15:34:32.280654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.759 [2024-11-25 15:34:32.280771] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.019 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.019 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:34.019 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:34.019 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.019 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:34.019 Base_1 00:07:34.019 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.019 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:34.019 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.019 
15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:34.279 Base_2 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:34.279 [2024-11-25 15:34:32.709491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:34.279 [2024-11-25 15:34:32.711231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:34.279 [2024-11-25 15:34:32.711298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:34.279 [2024-11-25 15:34:32.711310] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:34.279 [2024-11-25 15:34:32.711557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:34.279 [2024-11-25 15:34:32.711707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:34.279 [2024-11-25 15:34:32.711715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:34.279 [2024-11-25 15:34:32.711862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:34.279 15:34:32 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:34.279 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:34.279 [2024-11-25 15:34:32.929181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:34.279 /dev/nbd0 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:34.539 1+0 records in 00:07:34.539 1+0 records out 00:07:34.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214317 s, 19.1 MB/s 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:34.539 15:34:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:34.539 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:34.539 { 00:07:34.540 "nbd_device": "/dev/nbd0", 00:07:34.540 "bdev_name": "raid" 00:07:34.540 } 00:07:34.540 ]' 00:07:34.540 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:34.540 { 00:07:34.540 "nbd_device": "/dev/nbd0", 00:07:34.540 "bdev_name": "raid" 00:07:34.540 } 00:07:34.540 ]' 00:07:34.540 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:34.799 4096+0 records in 00:07:34.799 4096+0 records out 00:07:34.799 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0260737 s, 80.4 MB/s 00:07:34.799 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:34.799 4096+0 records in 00:07:34.800 4096+0 records out 00:07:34.800 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.179239 s, 11.7 MB/s 00:07:34.800 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:34.800 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:35.059 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:35.059 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:35.059 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:35.060 128+0 records in 00:07:35.060 128+0 records out 00:07:35.060 65536 bytes (66 kB, 64 KiB) copied, 0.00129435 s, 50.6 MB/s 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:35.060 2035+0 records in 00:07:35.060 2035+0 records out 00:07:35.060 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0147679 s, 70.6 MB/s 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:35.060 456+0 records in 00:07:35.060 456+0 records out 00:07:35.060 233472 bytes (233 kB, 228 KiB) copied, 0.00400419 s, 58.3 MB/s 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.060 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:35.319 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:35.319 [2024-11-25 15:34:33.797953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.319 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:35.319 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:35.319 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.319 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.319 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:35.319 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:35.319 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.319 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:35.319 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:35.319 15:34:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60161 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60161 ']' 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60161 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60161 00:07:35.579 killing process with pid 60161 00:07:35.579 15:34:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.580 15:34:34 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.580 15:34:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60161' 00:07:35.580 15:34:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60161 00:07:35.580 [2024-11-25 15:34:34.114566] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.580 [2024-11-25 15:34:34.114664] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.580 [2024-11-25 15:34:34.114713] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.580 [2024-11-25 15:34:34.114728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:35.580 15:34:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60161 00:07:35.839 [2024-11-25 15:34:34.309248] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.778 ************************************ 00:07:36.778 END TEST raid_function_test_raid0 00:07:36.778 ************************************ 00:07:36.778 15:34:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:36.778 00:07:36.778 real 0m3.642s 00:07:36.778 user 0m4.239s 00:07:36.778 sys 0m0.882s 00:07:36.778 15:34:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.778 15:34:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:36.778 15:34:35 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:36.778 15:34:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:36.778 15:34:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.778 15:34:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.779 
************************************ 00:07:36.779 START TEST raid_function_test_concat 00:07:36.779 ************************************ 00:07:36.779 15:34:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:36.779 15:34:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:36.779 15:34:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:36.779 15:34:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:36.779 15:34:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60287 00:07:36.779 15:34:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:36.779 Process raid pid: 60287 00:07:36.779 15:34:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60287' 00:07:36.779 15:34:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60287 00:07:36.779 15:34:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60287 ']' 00:07:36.779 15:34:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.779 15:34:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.779 15:34:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:36.779 15:34:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.779 15:34:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:37.036 [2024-11-25 15:34:35.517209] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:07:37.037 [2024-11-25 15:34:35.517409] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.037 [2024-11-25 15:34:35.692702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.295 [2024-11-25 15:34:35.804112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.554 [2024-11-25 15:34:35.997657] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.554 [2024-11-25 15:34:35.997769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:37.812 Base_1 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:37.812 Base_2 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:37.812 [2024-11-25 15:34:36.423171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:37.812 [2024-11-25 15:34:36.424962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:37.812 [2024-11-25 15:34:36.425026] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:37.812 [2024-11-25 15:34:36.425055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:37.812 [2024-11-25 15:34:36.425306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:37.812 [2024-11-25 15:34:36.425461] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:37.812 [2024-11-25 15:34:36.425471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:37.812 [2024-11-25 15:34:36.425609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:37.812 15:34:36 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:37.812 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:37.813 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:38.071 [2024-11-25 15:34:36.658825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:38.071 /dev/nbd0 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.071 1+0 records in 00:07:38.071 1+0 records out 00:07:38.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385424 s, 10.6 MB/s 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.071 
15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:38.071 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:38.330 { 00:07:38.330 "nbd_device": "/dev/nbd0", 00:07:38.330 "bdev_name": "raid" 00:07:38.330 } 00:07:38.330 ]' 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:38.330 { 00:07:38.330 "nbd_device": "/dev/nbd0", 00:07:38.330 "bdev_name": "raid" 00:07:38.330 } 00:07:38.330 ]' 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:38.330 
15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:38.330 15:34:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:38.330 4096+0 records in 00:07:38.330 4096+0 records out 00:07:38.330 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0276691 s, 75.8 MB/s 00:07:38.330 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:38.589 4096+0 records in 00:07:38.589 4096+0 
records out 00:07:38.589 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.1812 s, 11.6 MB/s 00:07:38.589 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:38.589 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:38.589 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:38.589 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:38.589 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:38.589 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:38.589 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:38.589 128+0 records in 00:07:38.589 128+0 records out 00:07:38.590 65536 bytes (66 kB, 64 KiB) copied, 0.00106229 s, 61.7 MB/s 00:07:38.590 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:38.590 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:38.590 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:38.590 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:38.590 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:38.590 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:38.590 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:38.590 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:07:38.590 2035+0 records in 00:07:38.590 2035+0 records out 00:07:38.590 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0133644 s, 78.0 MB/s 00:07:38.590 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:38.590 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:38.590 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:38.849 456+0 records in 00:07:38.849 456+0 records out 00:07:38.849 233472 bytes (233 kB, 228 KiB) copied, 0.00347003 s, 67.3 MB/s 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:38.849 [2024-11-25 15:34:37.508058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:38.849 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:38.849 15:34:37 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60287 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60287 ']' 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60287 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.108 15:34:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60287 00:07:39.368 killing process with pid 60287 00:07:39.368 15:34:37 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.368 15:34:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.368 15:34:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60287' 00:07:39.368 15:34:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60287 00:07:39.368 [2024-11-25 15:34:37.821049] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.368 [2024-11-25 15:34:37.821155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.368 [2024-11-25 15:34:37.821207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.368 15:34:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60287 00:07:39.368 [2024-11-25 15:34:37.821218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:39.368 [2024-11-25 15:34:38.018158] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.749 15:34:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:40.749 00:07:40.749 real 0m3.628s 00:07:40.749 user 0m4.216s 00:07:40.749 sys 0m0.887s 00:07:40.749 ************************************ 00:07:40.749 END TEST raid_function_test_concat 00:07:40.749 ************************************ 00:07:40.749 15:34:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.749 15:34:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 15:34:39 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:40.749 15:34:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.749 15:34:39 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.749 15:34:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 ************************************ 00:07:40.749 START TEST raid0_resize_test 00:07:40.749 ************************************ 00:07:40.749 15:34:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:40.749 15:34:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:40.749 15:34:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:40.749 15:34:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:40.749 15:34:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:40.749 15:34:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:40.749 15:34:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:40.749 15:34:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:40.749 Process raid pid: 60408 00:07:40.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:40.750 15:34:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:40.750 15:34:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60408 00:07:40.750 15:34:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:40.750 15:34:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60408' 00:07:40.750 15:34:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60408 00:07:40.750 15:34:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60408 ']' 00:07:40.750 15:34:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.750 15:34:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.750 15:34:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.750 15:34:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.750 15:34:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.750 [2024-11-25 15:34:39.217418] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:07:40.750 [2024-11-25 15:34:39.217629] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.750 [2024-11-25 15:34:39.391094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.013 [2024-11-25 15:34:39.498734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.278 [2024-11-25 15:34:39.692383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.278 [2024-11-25 15:34:39.692498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.538 Base_1 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.538 Base_2 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.538 [2024-11-25 15:34:40.063715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:41.538 [2024-11-25 15:34:40.065470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:41.538 [2024-11-25 15:34:40.065576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:41.538 [2024-11-25 15:34:40.065613] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:41.538 [2024-11-25 15:34:40.065892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:41.538 [2024-11-25 15:34:40.066081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:41.538 [2024-11-25 15:34:40.066127] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:41.538 [2024-11-25 15:34:40.066337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.538 [2024-11-25 15:34:40.075668] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:41.538 [2024-11-25 15:34:40.075730] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:41.538 true 
00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.538 [2024-11-25 15:34:40.091823] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.538 [2024-11-25 15:34:40.135552] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:41.538 [2024-11-25 15:34:40.135574] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:41.538 [2024-11-25 15:34:40.135604] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:41.538 true 
00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:41.538 [2024-11-25 15:34:40.147699] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60408 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60408 ']' 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60408 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.538 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60408 00:07:41.798 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.798 15:34:40 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.798 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60408' 00:07:41.798 killing process with pid 60408 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60408 00:07:41.798 [2024-11-25 15:34:40.232622] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.798 [2024-11-25 15:34:40.232740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.798 [2024-11-25 15:34:40.232821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.798 15:34:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60408 00:07:41.798 [2024-11-25 15:34:40.232888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:41.798 [2024-11-25 15:34:40.249809] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.737 15:34:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:42.737 00:07:42.737 real 0m2.155s 00:07:42.737 user 0m2.299s 00:07:42.737 sys 0m0.310s 00:07:42.737 15:34:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.737 15:34:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.737 ************************************ 00:07:42.737 END TEST raid0_resize_test 00:07:42.737 ************************************ 00:07:42.737 15:34:41 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:42.737 15:34:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.737 15:34:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.737 15:34:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.737 
************************************ 00:07:42.737 START TEST raid1_resize_test 00:07:42.737 ************************************ 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60470 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:42.737 Process raid pid: 60470 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60470' 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60470 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60470 ']' 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.737 15:34:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.998 [2024-11-25 15:34:41.441850] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:07:42.998 [2024-11-25 15:34:41.442076] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.998 [2024-11-25 15:34:41.614115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.258 [2024-11-25 15:34:41.725292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.258 [2024-11-25 15:34:41.921821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.258 [2024-11-25 15:34:41.921932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.829 Base_1 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:43.829 15:34:42 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.829 Base_2 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.829 [2024-11-25 15:34:42.314623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:43.829 [2024-11-25 15:34:42.316396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:43.829 [2024-11-25 15:34:42.316543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:43.829 [2024-11-25 15:34:42.316564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:43.829 [2024-11-25 15:34:42.316805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:43.829 [2024-11-25 15:34:42.316943] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:43.829 [2024-11-25 15:34:42.316953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:43.829 [2024-11-25 15:34:42.317107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:43.829 15:34:42 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.829 [2024-11-25 15:34:42.326588] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:43.829 [2024-11-25 15:34:42.326639] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:43.829 true 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.829 [2024-11-25 15:34:42.342727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:43.829 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:43.830 [2024-11-25 15:34:42.382473] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:43.830 [2024-11-25 15:34:42.382496] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:43.830 [2024-11-25 15:34:42.382525] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:43.830 true 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:43.830 [2024-11-25 15:34:42.394645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60470 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60470 ']' 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60470 00:07:43.830 
15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60470 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60470' 00:07:43.830 killing process with pid 60470 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60470 00:07:43.830 [2024-11-25 15:34:42.480118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.830 [2024-11-25 15:34:42.480240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.830 15:34:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60470 00:07:43.830 [2024-11-25 15:34:42.480739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.830 [2024-11-25 15:34:42.480812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:43.830 [2024-11-25 15:34:42.496864] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.212 15:34:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:45.212 00:07:45.212 real 0m2.182s 00:07:45.212 user 0m2.319s 00:07:45.212 sys 0m0.327s 00:07:45.212 ************************************ 00:07:45.212 END TEST raid1_resize_test 00:07:45.212 ************************************ 00:07:45.212 15:34:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.212 15:34:43 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.212 15:34:43 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:45.212 15:34:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:45.212 15:34:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:45.212 15:34:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:45.212 15:34:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.212 15:34:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.212 ************************************ 00:07:45.212 START TEST raid_state_function_test 00:07:45.212 ************************************ 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:45.212 15:34:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60527 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60527' 00:07:45.212 Process raid pid: 60527 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60527 00:07:45.212 15:34:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60527 ']' 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.212 15:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.212 [2024-11-25 15:34:43.705250] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:07:45.212 [2024-11-25 15:34:43.705366] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.212 [2024-11-25 15:34:43.877986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.472 [2024-11-25 15:34:43.990243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.731 [2024-11-25 15:34:44.187491] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.731 [2024-11-25 15:34:44.187605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.990 [2024-11-25 15:34:44.528003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:45.990 [2024-11-25 15:34:44.528084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:45.990 [2024-11-25 15:34:44.528094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.990 [2024-11-25 15:34:44.528103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.990 
15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.990 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.990 "name": "Existed_Raid", 00:07:45.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.990 "strip_size_kb": 64, 00:07:45.990 "state": "configuring", 00:07:45.990 "raid_level": "raid0", 00:07:45.990 "superblock": false, 00:07:45.990 "num_base_bdevs": 2, 00:07:45.990 "num_base_bdevs_discovered": 0, 00:07:45.991 "num_base_bdevs_operational": 2, 00:07:45.991 "base_bdevs_list": [ 00:07:45.991 { 00:07:45.991 "name": "BaseBdev1", 00:07:45.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.991 "is_configured": false, 00:07:45.991 "data_offset": 0, 00:07:45.991 "data_size": 0 00:07:45.991 }, 00:07:45.991 { 00:07:45.991 "name": "BaseBdev2", 00:07:45.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.991 "is_configured": false, 00:07:45.991 "data_offset": 0, 00:07:45.991 "data_size": 0 00:07:45.991 } 00:07:45.991 ] 00:07:45.991 }' 00:07:45.991 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.991 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.560 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.560 15:34:44 
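The `verify_raid_bdev_state` helper traced above fetches the array's JSON via `rpc_cmd bdev_raid_get_bdevs all` and isolates one entry with a jq `select`. A minimal self-contained sketch of that filtering step follows; the inline JSON is modeled on the "configuring" dump in the log, and `rpc.py` itself is not invoked (requires `jq` on PATH):

```shell
#!/bin/sh
# Sample input modeled on the log's bdev_raid_get_bdevs dump while the array
# is still "configuring" (no base bdevs discovered yet).
raid_json='[{"name":"Existed_Raid","state":"configuring","raid_level":"raid0","strip_size_kb":64,"num_base_bdevs":2,"num_base_bdevs_discovered":0,"num_base_bdevs_operational":2}]'

# Same jq filter the test uses to pick out the raid bdev by name.
info=$(printf '%s' "$raid_json" | jq -r '.[] | select(.name == "Existed_Raid")')

# Extract the fields verify_raid_bdev_state compares against its arguments.
state=$(printf '%s' "$info" | jq -r '.state')
discovered=$(printf '%s' "$info" | jq -r '.num_base_bdevs_discovered')

echo "state=$state discovered=$discovered"
```

The real helper then string-compares these fields against the expected values passed in (`configuring raid0 64 2` in this phase of the test).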
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.560 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.560 [2024-11-25 15:34:44.975166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.560 [2024-11-25 15:34:44.975251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:46.560 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.560 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.560 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.560 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.560 [2024-11-25 15:34:44.987169] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.560 [2024-11-25 15:34:44.987249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.560 [2024-11-25 15:34:44.987277] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.560 [2024-11-25 15:34:44.987302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.560 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.560 15:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:46.560 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.560 15:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.560 [2024-11-25 15:34:45.034271] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.560 BaseBdev1 00:07:46.560 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.560 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:46.560 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:46.560 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:46.560 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:46.560 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:46.560 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:46.560 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:46.560 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.560 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.560 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.560 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:46.560 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.560 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.560 [ 00:07:46.560 { 00:07:46.560 "name": "BaseBdev1", 00:07:46.560 "aliases": [ 00:07:46.560 "edafc692-2588-4568-a94f-0903baa547e2" 00:07:46.560 ], 00:07:46.560 "product_name": "Malloc disk", 00:07:46.560 "block_size": 512, 00:07:46.560 "num_blocks": 65536, 00:07:46.560 "uuid": 
"edafc692-2588-4568-a94f-0903baa547e2", 00:07:46.560 "assigned_rate_limits": { 00:07:46.560 "rw_ios_per_sec": 0, 00:07:46.560 "rw_mbytes_per_sec": 0, 00:07:46.560 "r_mbytes_per_sec": 0, 00:07:46.560 "w_mbytes_per_sec": 0 00:07:46.560 }, 00:07:46.560 "claimed": true, 00:07:46.560 "claim_type": "exclusive_write", 00:07:46.560 "zoned": false, 00:07:46.560 "supported_io_types": { 00:07:46.560 "read": true, 00:07:46.560 "write": true, 00:07:46.560 "unmap": true, 00:07:46.560 "flush": true, 00:07:46.560 "reset": true, 00:07:46.560 "nvme_admin": false, 00:07:46.560 "nvme_io": false, 00:07:46.560 "nvme_io_md": false, 00:07:46.560 "write_zeroes": true, 00:07:46.560 "zcopy": true, 00:07:46.560 "get_zone_info": false, 00:07:46.560 "zone_management": false, 00:07:46.560 "zone_append": false, 00:07:46.560 "compare": false, 00:07:46.560 "compare_and_write": false, 00:07:46.560 "abort": true, 00:07:46.560 "seek_hole": false, 00:07:46.560 "seek_data": false, 00:07:46.561 "copy": true, 00:07:46.561 "nvme_iov_md": false 00:07:46.561 }, 00:07:46.561 "memory_domains": [ 00:07:46.561 { 00:07:46.561 "dma_device_id": "system", 00:07:46.561 "dma_device_type": 1 00:07:46.561 }, 00:07:46.561 { 00:07:46.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.561 "dma_device_type": 2 00:07:46.561 } 00:07:46.561 ], 00:07:46.561 "driver_specific": {} 00:07:46.561 } 00:07:46.561 ] 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.561 15:34:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.561 "name": "Existed_Raid", 00:07:46.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.561 "strip_size_kb": 64, 00:07:46.561 "state": "configuring", 00:07:46.561 "raid_level": "raid0", 00:07:46.561 "superblock": false, 00:07:46.561 "num_base_bdevs": 2, 00:07:46.561 "num_base_bdevs_discovered": 1, 00:07:46.561 "num_base_bdevs_operational": 2, 00:07:46.561 "base_bdevs_list": [ 00:07:46.561 { 00:07:46.561 "name": "BaseBdev1", 00:07:46.561 "uuid": "edafc692-2588-4568-a94f-0903baa547e2", 00:07:46.561 "is_configured": true, 00:07:46.561 "data_offset": 0, 
00:07:46.561 "data_size": 65536 00:07:46.561 }, 00:07:46.561 { 00:07:46.561 "name": "BaseBdev2", 00:07:46.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.561 "is_configured": false, 00:07:46.561 "data_offset": 0, 00:07:46.561 "data_size": 0 00:07:46.561 } 00:07:46.561 ] 00:07:46.561 }' 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.561 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.132 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:47.132 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.132 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.132 [2024-11-25 15:34:45.509488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:47.132 [2024-11-25 15:34:45.509538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:47.132 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.132 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:47.132 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.133 [2024-11-25 15:34:45.521506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.133 [2024-11-25 15:34:45.523339] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.133 [2024-11-25 15:34:45.523428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.133 "name": "Existed_Raid", 00:07:47.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.133 "strip_size_kb": 64, 00:07:47.133 "state": "configuring", 00:07:47.133 "raid_level": "raid0", 00:07:47.133 "superblock": false, 00:07:47.133 "num_base_bdevs": 2, 00:07:47.133 "num_base_bdevs_discovered": 1, 00:07:47.133 "num_base_bdevs_operational": 2, 00:07:47.133 "base_bdevs_list": [ 00:07:47.133 { 00:07:47.133 "name": "BaseBdev1", 00:07:47.133 "uuid": "edafc692-2588-4568-a94f-0903baa547e2", 00:07:47.133 "is_configured": true, 00:07:47.133 "data_offset": 0, 00:07:47.133 "data_size": 65536 00:07:47.133 }, 00:07:47.133 { 00:07:47.133 "name": "BaseBdev2", 00:07:47.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.133 "is_configured": false, 00:07:47.133 "data_offset": 0, 00:07:47.133 "data_size": 0 00:07:47.133 } 00:07:47.133 ] 00:07:47.133 }' 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.133 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.392 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:47.392 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.392 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.392 [2024-11-25 15:34:45.914914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.392 [2024-11-25 15:34:45.915054] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:47.392 [2024-11-25 15:34:45.915100] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:47.392 [2024-11-25 15:34:45.915415] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:47.392 [2024-11-25 15:34:45.915642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:47.392 [2024-11-25 15:34:45.915695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:47.392 [2024-11-25 15:34:45.916013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.392 BaseBdev2 00:07:47.392 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.392 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:47.392 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:47.392 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:47.392 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:47.392 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:47.392 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:47.392 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:47.392 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.392 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.393 15:34:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.393 [ 00:07:47.393 { 00:07:47.393 "name": "BaseBdev2", 00:07:47.393 "aliases": [ 00:07:47.393 "063c6826-e993-470e-b8ee-3f9bba370b94" 00:07:47.393 ], 00:07:47.393 "product_name": "Malloc disk", 00:07:47.393 "block_size": 512, 00:07:47.393 "num_blocks": 65536, 00:07:47.393 "uuid": "063c6826-e993-470e-b8ee-3f9bba370b94", 00:07:47.393 "assigned_rate_limits": { 00:07:47.393 "rw_ios_per_sec": 0, 00:07:47.393 "rw_mbytes_per_sec": 0, 00:07:47.393 "r_mbytes_per_sec": 0, 00:07:47.393 "w_mbytes_per_sec": 0 00:07:47.393 }, 00:07:47.393 "claimed": true, 00:07:47.393 "claim_type": "exclusive_write", 00:07:47.393 "zoned": false, 00:07:47.393 "supported_io_types": { 00:07:47.393 "read": true, 00:07:47.393 "write": true, 00:07:47.393 "unmap": true, 00:07:47.393 "flush": true, 00:07:47.393 "reset": true, 00:07:47.393 "nvme_admin": false, 00:07:47.393 "nvme_io": false, 00:07:47.393 "nvme_io_md": false, 00:07:47.393 "write_zeroes": true, 00:07:47.393 "zcopy": true, 00:07:47.393 "get_zone_info": false, 00:07:47.393 "zone_management": false, 00:07:47.393 "zone_append": false, 00:07:47.393 "compare": false, 00:07:47.393 "compare_and_write": false, 00:07:47.393 "abort": true, 00:07:47.393 "seek_hole": false, 00:07:47.393 "seek_data": false, 00:07:47.393 "copy": true, 00:07:47.393 "nvme_iov_md": false 00:07:47.393 }, 00:07:47.393 "memory_domains": [ 00:07:47.393 { 00:07:47.393 "dma_device_id": "system", 00:07:47.393 "dma_device_type": 1 00:07:47.393 }, 00:07:47.393 { 00:07:47.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.393 "dma_device_type": 2 00:07:47.393 } 00:07:47.393 ], 00:07:47.393 "driver_specific": {} 00:07:47.393 } 00:07:47.393 ] 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:47.393 15:34:45 
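The `waitforbdev` calls traced above (`autotest_common.sh@903`–`@911`) retry `bdev_get_bdevs -b NAME -t 2000` until the freshly created malloc bdev is visible. A runnable sketch of that polling pattern is below; `bdev_exists` is a hypothetical stand-in for the RPC lookup (in the real test it is an RPC to the SPDK target), and the marker file under `/tmp` is purely illustrative:

```shell
#!/bin/sh
# Hypothetical stand-in for "rpc.py bdev_get_bdevs -b NAME": succeeds once a
# marker file exists. Substitutes for the real SPDK RPC round-trip.
bdev_exists() { [ -e "/tmp/bdev_$1" ]; }

# Poll until the bdev shows up or the retry budget is exhausted,
# mirroring the shape of waitforbdev in the trace.
waitforbdev() {
    name=$1
    retries=${2:-100}
    i=0
    while [ "$i" -lt "$retries" ]; do
        if bdev_exists "$name"; then
            return 0
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}

touch "/tmp/bdev_BaseBdev1"          # simulate bdev_malloc_create completing
waitforbdev BaseBdev1 && echo "BaseBdev1 ready"
```

The retry-with-timeout shape matters because `bdev_wait_for_examine` and bdev registration are asynchronous; a single immediate lookup could race the creation.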
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.393 15:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.393 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:47.393 "name": "Existed_Raid", 00:07:47.393 "uuid": "eb5ad17b-492c-4201-a90f-30d081d31e15", 00:07:47.393 "strip_size_kb": 64, 00:07:47.393 "state": "online", 00:07:47.393 "raid_level": "raid0", 00:07:47.393 "superblock": false, 00:07:47.393 "num_base_bdevs": 2, 00:07:47.393 "num_base_bdevs_discovered": 2, 00:07:47.393 "num_base_bdevs_operational": 2, 00:07:47.393 "base_bdevs_list": [ 00:07:47.393 { 00:07:47.393 "name": "BaseBdev1", 00:07:47.393 "uuid": "edafc692-2588-4568-a94f-0903baa547e2", 00:07:47.393 "is_configured": true, 00:07:47.393 "data_offset": 0, 00:07:47.393 "data_size": 65536 00:07:47.393 }, 00:07:47.393 { 00:07:47.393 "name": "BaseBdev2", 00:07:47.393 "uuid": "063c6826-e993-470e-b8ee-3f9bba370b94", 00:07:47.393 "is_configured": true, 00:07:47.393 "data_offset": 0, 00:07:47.393 "data_size": 65536 00:07:47.393 } 00:07:47.393 ] 00:07:47.393 }' 00:07:47.393 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.393 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.961 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:47.961 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:47.961 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.961 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.961 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.961 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.961 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.961 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:47.961 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.961 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.961 [2024-11-25 15:34:46.370491] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.961 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.961 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.961 "name": "Existed_Raid", 00:07:47.961 "aliases": [ 00:07:47.961 "eb5ad17b-492c-4201-a90f-30d081d31e15" 00:07:47.961 ], 00:07:47.961 "product_name": "Raid Volume", 00:07:47.961 "block_size": 512, 00:07:47.961 "num_blocks": 131072, 00:07:47.961 "uuid": "eb5ad17b-492c-4201-a90f-30d081d31e15", 00:07:47.961 "assigned_rate_limits": { 00:07:47.961 "rw_ios_per_sec": 0, 00:07:47.961 "rw_mbytes_per_sec": 0, 00:07:47.961 "r_mbytes_per_sec": 0, 00:07:47.961 "w_mbytes_per_sec": 0 00:07:47.961 }, 00:07:47.961 "claimed": false, 00:07:47.961 "zoned": false, 00:07:47.961 "supported_io_types": { 00:07:47.961 "read": true, 00:07:47.961 "write": true, 00:07:47.961 "unmap": true, 00:07:47.961 "flush": true, 00:07:47.961 "reset": true, 00:07:47.961 "nvme_admin": false, 00:07:47.961 "nvme_io": false, 00:07:47.961 "nvme_io_md": false, 00:07:47.961 "write_zeroes": true, 00:07:47.961 "zcopy": false, 00:07:47.961 "get_zone_info": false, 00:07:47.961 "zone_management": false, 00:07:47.962 "zone_append": false, 00:07:47.962 "compare": false, 00:07:47.962 "compare_and_write": false, 00:07:47.962 "abort": false, 00:07:47.962 "seek_hole": false, 00:07:47.962 "seek_data": false, 00:07:47.962 "copy": false, 00:07:47.962 "nvme_iov_md": false 00:07:47.962 }, 00:07:47.962 "memory_domains": [ 00:07:47.962 { 00:07:47.962 "dma_device_id": "system", 00:07:47.962 "dma_device_type": 1 00:07:47.962 }, 00:07:47.962 { 00:07:47.962 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:47.962 "dma_device_type": 2 00:07:47.962 }, 00:07:47.962 { 00:07:47.962 "dma_device_id": "system", 00:07:47.962 "dma_device_type": 1 00:07:47.962 }, 00:07:47.962 { 00:07:47.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.962 "dma_device_type": 2 00:07:47.962 } 00:07:47.962 ], 00:07:47.962 "driver_specific": { 00:07:47.962 "raid": { 00:07:47.962 "uuid": "eb5ad17b-492c-4201-a90f-30d081d31e15", 00:07:47.962 "strip_size_kb": 64, 00:07:47.962 "state": "online", 00:07:47.962 "raid_level": "raid0", 00:07:47.962 "superblock": false, 00:07:47.962 "num_base_bdevs": 2, 00:07:47.962 "num_base_bdevs_discovered": 2, 00:07:47.962 "num_base_bdevs_operational": 2, 00:07:47.962 "base_bdevs_list": [ 00:07:47.962 { 00:07:47.962 "name": "BaseBdev1", 00:07:47.962 "uuid": "edafc692-2588-4568-a94f-0903baa547e2", 00:07:47.962 "is_configured": true, 00:07:47.962 "data_offset": 0, 00:07:47.962 "data_size": 65536 00:07:47.962 }, 00:07:47.962 { 00:07:47.962 "name": "BaseBdev2", 00:07:47.962 "uuid": "063c6826-e993-470e-b8ee-3f9bba370b94", 00:07:47.962 "is_configured": true, 00:07:47.962 "data_offset": 0, 00:07:47.962 "data_size": 65536 00:07:47.962 } 00:07:47.962 ] 00:07:47.962 } 00:07:47.962 } 00:07:47.962 }' 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:47.962 BaseBdev2' 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.962 15:34:46 bdev_raid.raid_state_function_test 
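The `verify_raid_bdev_properties` phase above joins each bdev's geometry fields into a single comparable string with jq, then checks the raid volume against every configured base bdev (the trailing spaces in `cmp_raid_bdev='512 '` come from jq rendering the null `md_size`/`md_interleave`/`dif_type` fields as empty strings). A self-contained sketch of that comparison, with inline JSON standing in for the `bdev_get_bdevs` output:

```shell
#!/bin/sh
# Geometry fields as they appear for the raid volume and a base bdev in the
# log; null metadata fields are intentional for plain malloc disks.
raid='{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}'
base='{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}'

# Same jq expression the test uses: join() renders nulls as empty strings,
# so the signature for a 512-byte, metadata-free bdev is "512" plus spaces.
sig() {
    printf '%s' "$1" | jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
}

if [ "$(sig "$raid")" = "$(sig "$base")" ]; then
    echo "geometry matches"
fi
```

Comparing the joined string rather than each field separately keeps the check cheap and makes a mismatch in any one geometry property fail the same assertion.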
-- common/autotest_common.sh@10 -- # set +x 00:07:47.962 [2024-11-25 15:34:46.565864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:47.962 [2024-11-25 15:34:46.566004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.962 [2024-11-25 15:34:46.566078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.221 15:34:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.221 "name": "Existed_Raid", 00:07:48.221 "uuid": "eb5ad17b-492c-4201-a90f-30d081d31e15", 00:07:48.221 "strip_size_kb": 64, 00:07:48.221 "state": "offline", 00:07:48.221 "raid_level": "raid0", 00:07:48.221 "superblock": false, 00:07:48.221 "num_base_bdevs": 2, 00:07:48.221 "num_base_bdevs_discovered": 1, 00:07:48.221 "num_base_bdevs_operational": 1, 00:07:48.221 "base_bdevs_list": [ 00:07:48.221 { 00:07:48.221 "name": null, 00:07:48.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.221 "is_configured": false, 00:07:48.221 "data_offset": 0, 00:07:48.221 "data_size": 65536 00:07:48.221 }, 00:07:48.221 { 00:07:48.221 "name": "BaseBdev2", 00:07:48.221 "uuid": "063c6826-e993-470e-b8ee-3f9bba370b94", 00:07:48.221 "is_configured": true, 00:07:48.221 "data_offset": 0, 00:07:48.221 "data_size": 65536 00:07:48.221 } 00:07:48.221 ] 00:07:48.221 }' 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.221 15:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.480 15:34:47 bdev_raid.raid_state_function_test 
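The transition verified above (array `online` goes to `offline` after `bdev_malloc_delete BaseBdev1`) hinges on the `has_redundancy` check at `bdev_raid.sh@198`–`@200`: raid0 stripes without parity or mirroring, so losing any base bdev takes the whole array down. A sketch of that decision, with the set of redundant levels being an assumption about SPDK's raid levels rather than a verbatim copy of the helper:

```shell
#!/bin/sh
# Mirrors the shape of has_redundancy in the trace: return 0 (true) only for
# levels that survive losing a base bdev. The level list here is an
# illustrative assumption, not copied from bdev_raid.sh.
has_redundancy() {
    case $1 in
        raid1|raid5f) return 0 ;;
        *) return 1 ;;
    esac
}

# raid0 has no redundancy, so the expected post-removal state is "offline",
# matching the verify_raid_bdev_state call in the log.
if has_redundancy raid0; then
    expected_state=online
else
    expected_state=offline
fi
echo "expected_state=$expected_state"
```

The log confirms the consequence: `num_base_bdevs_discovered` drops to 1, the removed slot's name becomes `null`, and the state field reads `offline`.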
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:48.480 15:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.480 15:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:48.480 15:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.480 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.480 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.480 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.480 15:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:48.480 15:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:48.480 15:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:48.480 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.480 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.480 [2024-11-25 15:34:47.144177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:48.480 [2024-11-25 15:34:47.144230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.739 15:34:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60527 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60527 ']' 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60527 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60527 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60527' 00:07:48.739 killing process with pid 60527 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60527 00:07:48.739 [2024-11-25 15:34:47.334206] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:48.739 15:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60527 00:07:48.739 [2024-11-25 15:34:47.350474] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:50.116 00:07:50.116 real 0m4.779s 00:07:50.116 user 0m6.895s 00:07:50.116 sys 0m0.765s 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.116 ************************************ 00:07:50.116 END TEST raid_state_function_test 00:07:50.116 ************************************ 00:07:50.116 15:34:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:50.116 15:34:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:50.116 15:34:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.116 15:34:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:50.116 ************************************ 00:07:50.116 START TEST raid_state_function_test_sb 00:07:50.116 ************************************ 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60769 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60769' 00:07:50.116 Process raid pid: 60769 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60769 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60769 ']' 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.116 15:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.116 [2024-11-25 15:34:48.556133] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:07:50.116 [2024-11-25 15:34:48.556299] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.116 [2024-11-25 15:34:48.729779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.376 [2024-11-25 15:34:48.841714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.376 [2024-11-25 15:34:49.027520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.376 [2024-11-25 15:34:49.027636] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.944 [2024-11-25 15:34:49.375523] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.944 [2024-11-25 15:34:49.375636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.944 [2024-11-25 15:34:49.375668] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.944 [2024-11-25 15:34:49.375690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.944 
15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.944 "name": "Existed_Raid", 00:07:50.944 "uuid": "0b9c29a2-f9a7-445a-8c06-8463001bacf0", 00:07:50.944 "strip_size_kb": 
64, 00:07:50.944 "state": "configuring", 00:07:50.944 "raid_level": "raid0", 00:07:50.944 "superblock": true, 00:07:50.944 "num_base_bdevs": 2, 00:07:50.944 "num_base_bdevs_discovered": 0, 00:07:50.944 "num_base_bdevs_operational": 2, 00:07:50.944 "base_bdevs_list": [ 00:07:50.944 { 00:07:50.944 "name": "BaseBdev1", 00:07:50.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.944 "is_configured": false, 00:07:50.944 "data_offset": 0, 00:07:50.944 "data_size": 0 00:07:50.944 }, 00:07:50.944 { 00:07:50.944 "name": "BaseBdev2", 00:07:50.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.944 "is_configured": false, 00:07:50.944 "data_offset": 0, 00:07:50.944 "data_size": 0 00:07:50.944 } 00:07:50.944 ] 00:07:50.944 }' 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.944 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 [2024-11-25 15:34:49.750835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.205 [2024-11-25 15:34:49.750912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 15:34:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 [2024-11-25 15:34:49.762834] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.205 [2024-11-25 15:34:49.762879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.205 [2024-11-25 15:34:49.762888] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.205 [2024-11-25 15:34:49.762900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 [2024-11-25 15:34:49.810158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.205 BaseBdev1 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 [ 00:07:51.205 { 00:07:51.205 "name": "BaseBdev1", 00:07:51.205 "aliases": [ 00:07:51.205 "475f16e6-a581-4cdb-a364-5dabd8b20833" 00:07:51.205 ], 00:07:51.205 "product_name": "Malloc disk", 00:07:51.205 "block_size": 512, 00:07:51.205 "num_blocks": 65536, 00:07:51.205 "uuid": "475f16e6-a581-4cdb-a364-5dabd8b20833", 00:07:51.205 "assigned_rate_limits": { 00:07:51.205 "rw_ios_per_sec": 0, 00:07:51.205 "rw_mbytes_per_sec": 0, 00:07:51.205 "r_mbytes_per_sec": 0, 00:07:51.205 "w_mbytes_per_sec": 0 00:07:51.205 }, 00:07:51.205 "claimed": true, 00:07:51.205 "claim_type": "exclusive_write", 00:07:51.205 "zoned": false, 00:07:51.205 "supported_io_types": { 00:07:51.205 "read": true, 00:07:51.205 "write": true, 00:07:51.205 "unmap": true, 00:07:51.205 "flush": true, 00:07:51.205 "reset": true, 00:07:51.205 "nvme_admin": false, 00:07:51.205 "nvme_io": false, 00:07:51.205 "nvme_io_md": false, 00:07:51.205 "write_zeroes": true, 00:07:51.205 "zcopy": true, 00:07:51.205 "get_zone_info": false, 00:07:51.205 "zone_management": false, 00:07:51.205 "zone_append": false, 00:07:51.205 "compare": false, 00:07:51.205 "compare_and_write": false, 00:07:51.205 
"abort": true, 00:07:51.205 "seek_hole": false, 00:07:51.205 "seek_data": false, 00:07:51.205 "copy": true, 00:07:51.205 "nvme_iov_md": false 00:07:51.205 }, 00:07:51.205 "memory_domains": [ 00:07:51.205 { 00:07:51.205 "dma_device_id": "system", 00:07:51.205 "dma_device_type": 1 00:07:51.205 }, 00:07:51.205 { 00:07:51.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.205 "dma_device_type": 2 00:07:51.205 } 00:07:51.206 ], 00:07:51.206 "driver_specific": {} 00:07:51.206 } 00:07:51.206 ] 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.206 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.467 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.467 "name": "Existed_Raid", 00:07:51.467 "uuid": "6a6317f7-4d46-4ecf-97ca-7c37708a41e3", 00:07:51.467 "strip_size_kb": 64, 00:07:51.467 "state": "configuring", 00:07:51.467 "raid_level": "raid0", 00:07:51.467 "superblock": true, 00:07:51.467 "num_base_bdevs": 2, 00:07:51.467 "num_base_bdevs_discovered": 1, 00:07:51.467 "num_base_bdevs_operational": 2, 00:07:51.467 "base_bdevs_list": [ 00:07:51.467 { 00:07:51.467 "name": "BaseBdev1", 00:07:51.467 "uuid": "475f16e6-a581-4cdb-a364-5dabd8b20833", 00:07:51.467 "is_configured": true, 00:07:51.467 "data_offset": 2048, 00:07:51.467 "data_size": 63488 00:07:51.467 }, 00:07:51.467 { 00:07:51.467 "name": "BaseBdev2", 00:07:51.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.467 "is_configured": false, 00:07:51.467 "data_offset": 0, 00:07:51.467 "data_size": 0 00:07:51.467 } 00:07:51.467 ] 00:07:51.467 }' 00:07:51.467 15:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.467 15:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.726 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.726 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.726 15:34:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.726 [2024-11-25 15:34:50.285389] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.726 [2024-11-25 15:34:50.285510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:51.726 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.726 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.726 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.726 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.726 [2024-11-25 15:34:50.297424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.726 [2024-11-25 15:34:50.299281] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.726 [2024-11-25 15:34:50.299327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.726 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.726 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:51.726 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.726 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:51.726 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.726 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.726 15:34:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.726 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.727 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.727 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.727 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.727 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.727 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.727 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.727 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.727 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.727 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.727 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.727 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.727 "name": "Existed_Raid", 00:07:51.727 "uuid": "0e62dde7-5217-40eb-b3a5-391c0966968a", 00:07:51.727 "strip_size_kb": 64, 00:07:51.727 "state": "configuring", 00:07:51.727 "raid_level": "raid0", 00:07:51.727 "superblock": true, 00:07:51.727 "num_base_bdevs": 2, 00:07:51.727 "num_base_bdevs_discovered": 1, 00:07:51.727 "num_base_bdevs_operational": 2, 00:07:51.727 "base_bdevs_list": [ 00:07:51.727 { 00:07:51.727 "name": "BaseBdev1", 00:07:51.727 "uuid": "475f16e6-a581-4cdb-a364-5dabd8b20833", 00:07:51.727 "is_configured": true, 00:07:51.727 "data_offset": 2048, 
00:07:51.727 "data_size": 63488 00:07:51.727 }, 00:07:51.727 { 00:07:51.727 "name": "BaseBdev2", 00:07:51.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.727 "is_configured": false, 00:07:51.727 "data_offset": 0, 00:07:51.727 "data_size": 0 00:07:51.727 } 00:07:51.727 ] 00:07:51.727 }' 00:07:51.727 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.727 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.296 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:52.296 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.296 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.296 [2024-11-25 15:34:50.714676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.296 [2024-11-25 15:34:50.715115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.296 [2024-11-25 15:34:50.715181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:52.296 [2024-11-25 15:34:50.715485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:52.296 [2024-11-25 15:34:50.715697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.296 [2024-11-25 15:34:50.715749] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:52.296 BaseBdev2 00:07:52.296 [2024-11-25 15:34:50.715961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.296 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.296 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:52.296 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:52.296 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.296 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:52.296 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.296 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.297 [ 00:07:52.297 { 00:07:52.297 "name": "BaseBdev2", 00:07:52.297 "aliases": [ 00:07:52.297 "e376f16c-de1d-4ae4-8389-5b22004d1768" 00:07:52.297 ], 00:07:52.297 "product_name": "Malloc disk", 00:07:52.297 "block_size": 512, 00:07:52.297 "num_blocks": 65536, 00:07:52.297 "uuid": "e376f16c-de1d-4ae4-8389-5b22004d1768", 00:07:52.297 "assigned_rate_limits": { 00:07:52.297 "rw_ios_per_sec": 0, 00:07:52.297 "rw_mbytes_per_sec": 0, 00:07:52.297 "r_mbytes_per_sec": 0, 00:07:52.297 "w_mbytes_per_sec": 0 00:07:52.297 }, 00:07:52.297 "claimed": true, 00:07:52.297 "claim_type": 
"exclusive_write", 00:07:52.297 "zoned": false, 00:07:52.297 "supported_io_types": { 00:07:52.297 "read": true, 00:07:52.297 "write": true, 00:07:52.297 "unmap": true, 00:07:52.297 "flush": true, 00:07:52.297 "reset": true, 00:07:52.297 "nvme_admin": false, 00:07:52.297 "nvme_io": false, 00:07:52.297 "nvme_io_md": false, 00:07:52.297 "write_zeroes": true, 00:07:52.297 "zcopy": true, 00:07:52.297 "get_zone_info": false, 00:07:52.297 "zone_management": false, 00:07:52.297 "zone_append": false, 00:07:52.297 "compare": false, 00:07:52.297 "compare_and_write": false, 00:07:52.297 "abort": true, 00:07:52.297 "seek_hole": false, 00:07:52.297 "seek_data": false, 00:07:52.297 "copy": true, 00:07:52.297 "nvme_iov_md": false 00:07:52.297 }, 00:07:52.297 "memory_domains": [ 00:07:52.297 { 00:07:52.297 "dma_device_id": "system", 00:07:52.297 "dma_device_type": 1 00:07:52.297 }, 00:07:52.297 { 00:07:52.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.297 "dma_device_type": 2 00:07:52.297 } 00:07:52.297 ], 00:07:52.297 "driver_specific": {} 00:07:52.297 } 00:07:52.297 ] 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.297 "name": "Existed_Raid", 00:07:52.297 "uuid": "0e62dde7-5217-40eb-b3a5-391c0966968a", 00:07:52.297 "strip_size_kb": 64, 00:07:52.297 "state": "online", 00:07:52.297 "raid_level": "raid0", 00:07:52.297 "superblock": true, 00:07:52.297 "num_base_bdevs": 2, 00:07:52.297 "num_base_bdevs_discovered": 2, 00:07:52.297 "num_base_bdevs_operational": 2, 00:07:52.297 "base_bdevs_list": [ 00:07:52.297 { 00:07:52.297 "name": "BaseBdev1", 00:07:52.297 "uuid": "475f16e6-a581-4cdb-a364-5dabd8b20833", 00:07:52.297 "is_configured": true, 00:07:52.297 "data_offset": 2048, 00:07:52.297 "data_size": 63488 
00:07:52.297 }, 00:07:52.297 { 00:07:52.297 "name": "BaseBdev2", 00:07:52.297 "uuid": "e376f16c-de1d-4ae4-8389-5b22004d1768", 00:07:52.297 "is_configured": true, 00:07:52.297 "data_offset": 2048, 00:07:52.297 "data_size": 63488 00:07:52.297 } 00:07:52.297 ] 00:07:52.297 }' 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.297 15:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.557 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:52.558 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:52.558 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:52.558 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:52.558 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:52.558 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:52.558 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:52.558 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:52.558 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.558 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.558 [2024-11-25 15:34:51.138267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.558 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.558 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:52.558 "name": 
"Existed_Raid", 00:07:52.558 "aliases": [ 00:07:52.558 "0e62dde7-5217-40eb-b3a5-391c0966968a" 00:07:52.558 ], 00:07:52.558 "product_name": "Raid Volume", 00:07:52.558 "block_size": 512, 00:07:52.558 "num_blocks": 126976, 00:07:52.558 "uuid": "0e62dde7-5217-40eb-b3a5-391c0966968a", 00:07:52.558 "assigned_rate_limits": { 00:07:52.558 "rw_ios_per_sec": 0, 00:07:52.558 "rw_mbytes_per_sec": 0, 00:07:52.558 "r_mbytes_per_sec": 0, 00:07:52.558 "w_mbytes_per_sec": 0 00:07:52.558 }, 00:07:52.558 "claimed": false, 00:07:52.558 "zoned": false, 00:07:52.558 "supported_io_types": { 00:07:52.558 "read": true, 00:07:52.558 "write": true, 00:07:52.558 "unmap": true, 00:07:52.558 "flush": true, 00:07:52.558 "reset": true, 00:07:52.558 "nvme_admin": false, 00:07:52.558 "nvme_io": false, 00:07:52.558 "nvme_io_md": false, 00:07:52.558 "write_zeroes": true, 00:07:52.558 "zcopy": false, 00:07:52.558 "get_zone_info": false, 00:07:52.558 "zone_management": false, 00:07:52.558 "zone_append": false, 00:07:52.558 "compare": false, 00:07:52.558 "compare_and_write": false, 00:07:52.558 "abort": false, 00:07:52.558 "seek_hole": false, 00:07:52.558 "seek_data": false, 00:07:52.558 "copy": false, 00:07:52.558 "nvme_iov_md": false 00:07:52.558 }, 00:07:52.558 "memory_domains": [ 00:07:52.558 { 00:07:52.558 "dma_device_id": "system", 00:07:52.558 "dma_device_type": 1 00:07:52.558 }, 00:07:52.558 { 00:07:52.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.558 "dma_device_type": 2 00:07:52.558 }, 00:07:52.558 { 00:07:52.558 "dma_device_id": "system", 00:07:52.558 "dma_device_type": 1 00:07:52.558 }, 00:07:52.558 { 00:07:52.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.558 "dma_device_type": 2 00:07:52.558 } 00:07:52.558 ], 00:07:52.558 "driver_specific": { 00:07:52.558 "raid": { 00:07:52.558 "uuid": "0e62dde7-5217-40eb-b3a5-391c0966968a", 00:07:52.558 "strip_size_kb": 64, 00:07:52.558 "state": "online", 00:07:52.558 "raid_level": "raid0", 00:07:52.558 "superblock": true, 00:07:52.558 
"num_base_bdevs": 2, 00:07:52.558 "num_base_bdevs_discovered": 2, 00:07:52.558 "num_base_bdevs_operational": 2, 00:07:52.558 "base_bdevs_list": [ 00:07:52.558 { 00:07:52.558 "name": "BaseBdev1", 00:07:52.558 "uuid": "475f16e6-a581-4cdb-a364-5dabd8b20833", 00:07:52.558 "is_configured": true, 00:07:52.558 "data_offset": 2048, 00:07:52.558 "data_size": 63488 00:07:52.558 }, 00:07:52.558 { 00:07:52.558 "name": "BaseBdev2", 00:07:52.558 "uuid": "e376f16c-de1d-4ae4-8389-5b22004d1768", 00:07:52.558 "is_configured": true, 00:07:52.558 "data_offset": 2048, 00:07:52.558 "data_size": 63488 00:07:52.558 } 00:07:52.558 ] 00:07:52.558 } 00:07:52.558 } 00:07:52.558 }' 00:07:52.558 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:52.558 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:52.558 BaseBdev2' 00:07:52.558 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.818 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.818 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.818 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:52.818 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.818 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.818 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.818 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:52.818 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.818 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.818 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.818 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:52.818 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.818 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.818 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.819 [2024-11-25 15:34:51.357630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:52.819 [2024-11-25 15:34:51.357664] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.819 [2024-11-25 15:34:51.357713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.819 15:34:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.819 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.078 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.079 "name": "Existed_Raid", 00:07:53.079 "uuid": "0e62dde7-5217-40eb-b3a5-391c0966968a", 00:07:53.079 "strip_size_kb": 64, 00:07:53.079 "state": "offline", 00:07:53.079 "raid_level": "raid0", 00:07:53.079 "superblock": true, 00:07:53.079 "num_base_bdevs": 2, 00:07:53.079 "num_base_bdevs_discovered": 1, 00:07:53.079 "num_base_bdevs_operational": 1, 00:07:53.079 "base_bdevs_list": [ 00:07:53.079 { 00:07:53.079 "name": null, 00:07:53.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.079 "is_configured": false, 00:07:53.079 "data_offset": 0, 00:07:53.079 "data_size": 63488 00:07:53.079 }, 00:07:53.079 { 00:07:53.079 "name": "BaseBdev2", 00:07:53.079 "uuid": "e376f16c-de1d-4ae4-8389-5b22004d1768", 00:07:53.079 "is_configured": true, 00:07:53.079 "data_offset": 2048, 00:07:53.079 "data_size": 63488 00:07:53.079 } 00:07:53.079 ] 00:07:53.079 }' 00:07:53.079 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.079 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.338 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:53.338 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.338 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.338 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.338 15:34:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.338 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:53.338 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.338 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:53.338 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:53.338 15:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:53.338 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.338 15:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.338 [2024-11-25 15:34:51.968338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:53.338 [2024-11-25 15:34:51.968391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60769 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60769 ']' 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60769 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60769 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60769' 00:07:53.597 killing process with pid 60769 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60769 00:07:53.597 [2024-11-25 15:34:52.144953] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.597 15:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60769 00:07:53.597 [2024-11-25 15:34:52.160826] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.536 15:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:07:54.536 00:07:54.536 real 0m4.746s 00:07:54.536 user 0m6.832s 00:07:54.536 sys 0m0.739s 00:07:54.536 15:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.536 15:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.536 ************************************ 00:07:54.536 END TEST raid_state_function_test_sb 00:07:54.536 ************************************ 00:07:54.796 15:34:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:54.796 15:34:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:54.796 15:34:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.796 15:34:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.796 ************************************ 00:07:54.796 START TEST raid_superblock_test 00:07:54.796 ************************************ 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61021 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61021 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61021 ']' 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.796 15:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:54.797 15:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.797 15:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.797 [2024-11-25 15:34:53.365972] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:07:54.797 [2024-11-25 15:34:53.366180] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61021 ] 00:07:55.056 [2024-11-25 15:34:53.541451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.056 [2024-11-25 15:34:53.652483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.316 [2024-11-25 15:34:53.849182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.316 [2024-11-25 15:34:53.849212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:55.576 15:34:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.576 malloc1 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.576 [2024-11-25 15:34:54.244997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:55.576 [2024-11-25 15:34:54.245143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.576 [2024-11-25 15:34:54.245208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:55.576 [2024-11-25 15:34:54.245246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.576 [2024-11-25 15:34:54.247308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.576 [2024-11-25 15:34:54.247386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:55.576 pt1 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:55.576 15:34:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.576 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.836 malloc2 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.836 [2024-11-25 15:34:54.302748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:55.836 [2024-11-25 15:34:54.302850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.836 [2024-11-25 15:34:54.302913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:55.836 
[2024-11-25 15:34:54.302930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.836 [2024-11-25 15:34:54.304939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.836 [2024-11-25 15:34:54.304988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:55.836 pt2 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.836 [2024-11-25 15:34:54.314788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:55.836 [2024-11-25 15:34:54.316546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:55.836 [2024-11-25 15:34:54.316698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:55.836 [2024-11-25 15:34:54.316710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:55.836 [2024-11-25 15:34:54.316931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:55.836 [2024-11-25 15:34:54.317105] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:55.836 [2024-11-25 15:34:54.317117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:55.836 [2024-11-25 15:34:54.317266] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.836 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.837 "name": "raid_bdev1", 00:07:55.837 "uuid": 
"97c3395a-c701-4210-8b09-a8fd20516d38", 00:07:55.837 "strip_size_kb": 64, 00:07:55.837 "state": "online", 00:07:55.837 "raid_level": "raid0", 00:07:55.837 "superblock": true, 00:07:55.837 "num_base_bdevs": 2, 00:07:55.837 "num_base_bdevs_discovered": 2, 00:07:55.837 "num_base_bdevs_operational": 2, 00:07:55.837 "base_bdevs_list": [ 00:07:55.837 { 00:07:55.837 "name": "pt1", 00:07:55.837 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.837 "is_configured": true, 00:07:55.837 "data_offset": 2048, 00:07:55.837 "data_size": 63488 00:07:55.837 }, 00:07:55.837 { 00:07:55.837 "name": "pt2", 00:07:55.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.837 "is_configured": true, 00:07:55.837 "data_offset": 2048, 00:07:55.837 "data_size": 63488 00:07:55.837 } 00:07:55.837 ] 00:07:55.837 }' 00:07:55.837 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.837 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.096 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:56.096 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:56.096 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.096 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.096 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.096 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.096 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.096 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.096 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.096 
15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.096 [2024-11-25 15:34:54.702434] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.096 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.096 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.096 "name": "raid_bdev1", 00:07:56.096 "aliases": [ 00:07:56.097 "97c3395a-c701-4210-8b09-a8fd20516d38" 00:07:56.097 ], 00:07:56.097 "product_name": "Raid Volume", 00:07:56.097 "block_size": 512, 00:07:56.097 "num_blocks": 126976, 00:07:56.097 "uuid": "97c3395a-c701-4210-8b09-a8fd20516d38", 00:07:56.097 "assigned_rate_limits": { 00:07:56.097 "rw_ios_per_sec": 0, 00:07:56.097 "rw_mbytes_per_sec": 0, 00:07:56.097 "r_mbytes_per_sec": 0, 00:07:56.097 "w_mbytes_per_sec": 0 00:07:56.097 }, 00:07:56.097 "claimed": false, 00:07:56.097 "zoned": false, 00:07:56.097 "supported_io_types": { 00:07:56.097 "read": true, 00:07:56.097 "write": true, 00:07:56.097 "unmap": true, 00:07:56.097 "flush": true, 00:07:56.097 "reset": true, 00:07:56.097 "nvme_admin": false, 00:07:56.097 "nvme_io": false, 00:07:56.097 "nvme_io_md": false, 00:07:56.097 "write_zeroes": true, 00:07:56.097 "zcopy": false, 00:07:56.097 "get_zone_info": false, 00:07:56.097 "zone_management": false, 00:07:56.097 "zone_append": false, 00:07:56.097 "compare": false, 00:07:56.097 "compare_and_write": false, 00:07:56.097 "abort": false, 00:07:56.097 "seek_hole": false, 00:07:56.097 "seek_data": false, 00:07:56.097 "copy": false, 00:07:56.097 "nvme_iov_md": false 00:07:56.097 }, 00:07:56.097 "memory_domains": [ 00:07:56.097 { 00:07:56.097 "dma_device_id": "system", 00:07:56.097 "dma_device_type": 1 00:07:56.097 }, 00:07:56.097 { 00:07:56.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.097 "dma_device_type": 2 00:07:56.097 }, 00:07:56.097 { 00:07:56.097 "dma_device_id": "system", 00:07:56.097 
"dma_device_type": 1 00:07:56.097 }, 00:07:56.097 { 00:07:56.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.097 "dma_device_type": 2 00:07:56.097 } 00:07:56.097 ], 00:07:56.097 "driver_specific": { 00:07:56.097 "raid": { 00:07:56.097 "uuid": "97c3395a-c701-4210-8b09-a8fd20516d38", 00:07:56.097 "strip_size_kb": 64, 00:07:56.097 "state": "online", 00:07:56.097 "raid_level": "raid0", 00:07:56.097 "superblock": true, 00:07:56.097 "num_base_bdevs": 2, 00:07:56.097 "num_base_bdevs_discovered": 2, 00:07:56.097 "num_base_bdevs_operational": 2, 00:07:56.097 "base_bdevs_list": [ 00:07:56.097 { 00:07:56.097 "name": "pt1", 00:07:56.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.097 "is_configured": true, 00:07:56.097 "data_offset": 2048, 00:07:56.097 "data_size": 63488 00:07:56.097 }, 00:07:56.097 { 00:07:56.097 "name": "pt2", 00:07:56.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.097 "is_configured": true, 00:07:56.097 "data_offset": 2048, 00:07:56.097 "data_size": 63488 00:07:56.097 } 00:07:56.097 ] 00:07:56.097 } 00:07:56.097 } 00:07:56.097 }' 00:07:56.097 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.097 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:56.097 pt2' 00:07:56.097 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.357 15:34:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.357 [2024-11-25 15:34:54.902000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=97c3395a-c701-4210-8b09-a8fd20516d38 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 97c3395a-c701-4210-8b09-a8fd20516d38 ']' 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.357 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.357 [2024-11-25 15:34:54.949633] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.357 [2024-11-25 15:34:54.949714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.357 [2024-11-25 15:34:54.949808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.357 [2024-11-25 15:34:54.949859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.358 [2024-11-25 15:34:54.949873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:56.358 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.358 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.358 15:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:56.358 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.358 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.358 15:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.358 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:56.618 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.618 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:56.618 15:34:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:56.618 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:56.618 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:56.618 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:56.618 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.619 [2024-11-25 15:34:55.073444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:56.619 [2024-11-25 15:34:55.075289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:56.619 [2024-11-25 15:34:55.075349] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:56.619 [2024-11-25 15:34:55.075397] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:56.619 [2024-11-25 15:34:55.075411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.619 [2024-11-25 15:34:55.075423] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:56.619 request: 00:07:56.619 { 00:07:56.619 "name": "raid_bdev1", 00:07:56.619 "raid_level": "raid0", 00:07:56.619 "base_bdevs": [ 00:07:56.619 "malloc1", 00:07:56.619 "malloc2" 00:07:56.619 ], 00:07:56.619 "strip_size_kb": 64, 00:07:56.619 "superblock": false, 00:07:56.619 "method": "bdev_raid_create", 00:07:56.619 "req_id": 1 00:07:56.619 } 00:07:56.619 Got JSON-RPC error response 00:07:56.619 response: 00:07:56.619 { 00:07:56.619 "code": -17, 00:07:56.619 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:56.619 } 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.619 [2024-11-25 15:34:55.125330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:56.619 [2024-11-25 15:34:55.125427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.619 [2024-11-25 15:34:55.125483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:56.619 [2024-11-25 15:34:55.125525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.619 [2024-11-25 15:34:55.127680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.619 [2024-11-25 15:34:55.127761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:56.619 [2024-11-25 15:34:55.127880] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:56.619 [2024-11-25 15:34:55.127981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:56.619 pt1 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.619 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.619 "name": "raid_bdev1", 00:07:56.619 "uuid": "97c3395a-c701-4210-8b09-a8fd20516d38", 00:07:56.619 "strip_size_kb": 64, 00:07:56.619 "state": "configuring", 00:07:56.619 "raid_level": "raid0", 00:07:56.619 "superblock": true, 00:07:56.619 "num_base_bdevs": 2, 00:07:56.619 "num_base_bdevs_discovered": 1, 00:07:56.619 "num_base_bdevs_operational": 2, 00:07:56.619 "base_bdevs_list": [ 00:07:56.619 { 00:07:56.619 "name": "pt1", 00:07:56.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.619 "is_configured": true, 00:07:56.619 "data_offset": 2048, 00:07:56.619 "data_size": 63488 00:07:56.619 }, 00:07:56.619 { 00:07:56.619 "name": null, 00:07:56.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.619 "is_configured": false, 00:07:56.619 "data_offset": 2048, 00:07:56.619 "data_size": 63488 00:07:56.620 } 00:07:56.620 ] 00:07:56.620 }' 00:07:56.620 15:34:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.620 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 [2024-11-25 15:34:55.552643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:56.879 [2024-11-25 15:34:55.552711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.879 [2024-11-25 15:34:55.552733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:56.879 [2024-11-25 15:34:55.552744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.879 [2024-11-25 15:34:55.553234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.879 [2024-11-25 15:34:55.553328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:56.879 [2024-11-25 15:34:55.553431] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:56.879 [2024-11-25 15:34:55.553460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:56.879 [2024-11-25 15:34:55.553587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:56.879 [2024-11-25 15:34:55.553599] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:56.879 [2024-11-25 15:34:55.553833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:56.879 [2024-11-25 15:34:55.553974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:56.879 [2024-11-25 15:34:55.553984] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:56.879 [2024-11-25 15:34:55.554139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.879 pt2 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.879 15:34:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.140 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.140 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.140 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.140 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.140 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.140 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.140 "name": "raid_bdev1", 00:07:57.140 "uuid": "97c3395a-c701-4210-8b09-a8fd20516d38", 00:07:57.140 "strip_size_kb": 64, 00:07:57.140 "state": "online", 00:07:57.140 "raid_level": "raid0", 00:07:57.140 "superblock": true, 00:07:57.140 "num_base_bdevs": 2, 00:07:57.140 "num_base_bdevs_discovered": 2, 00:07:57.140 "num_base_bdevs_operational": 2, 00:07:57.140 "base_bdevs_list": [ 00:07:57.140 { 00:07:57.140 "name": "pt1", 00:07:57.140 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.140 "is_configured": true, 00:07:57.140 "data_offset": 2048, 00:07:57.140 "data_size": 63488 00:07:57.140 }, 00:07:57.140 { 00:07:57.140 "name": "pt2", 00:07:57.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.140 "is_configured": true, 00:07:57.140 "data_offset": 2048, 00:07:57.140 "data_size": 63488 00:07:57.140 } 00:07:57.140 ] 00:07:57.140 }' 00:07:57.140 15:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.140 15:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.399 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:57.399 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:57.399 
15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:57.399 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:57.399 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:57.399 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:57.399 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:57.399 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:57.399 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.399 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.399 [2024-11-25 15:34:56.012113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.400 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.400 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:57.400 "name": "raid_bdev1", 00:07:57.400 "aliases": [ 00:07:57.400 "97c3395a-c701-4210-8b09-a8fd20516d38" 00:07:57.400 ], 00:07:57.400 "product_name": "Raid Volume", 00:07:57.400 "block_size": 512, 00:07:57.400 "num_blocks": 126976, 00:07:57.400 "uuid": "97c3395a-c701-4210-8b09-a8fd20516d38", 00:07:57.400 "assigned_rate_limits": { 00:07:57.400 "rw_ios_per_sec": 0, 00:07:57.400 "rw_mbytes_per_sec": 0, 00:07:57.400 "r_mbytes_per_sec": 0, 00:07:57.400 "w_mbytes_per_sec": 0 00:07:57.400 }, 00:07:57.400 "claimed": false, 00:07:57.400 "zoned": false, 00:07:57.400 "supported_io_types": { 00:07:57.400 "read": true, 00:07:57.400 "write": true, 00:07:57.400 "unmap": true, 00:07:57.400 "flush": true, 00:07:57.400 "reset": true, 00:07:57.400 "nvme_admin": false, 00:07:57.400 "nvme_io": false, 00:07:57.400 "nvme_io_md": false, 00:07:57.400 
"write_zeroes": true, 00:07:57.400 "zcopy": false, 00:07:57.400 "get_zone_info": false, 00:07:57.400 "zone_management": false, 00:07:57.400 "zone_append": false, 00:07:57.400 "compare": false, 00:07:57.400 "compare_and_write": false, 00:07:57.400 "abort": false, 00:07:57.400 "seek_hole": false, 00:07:57.400 "seek_data": false, 00:07:57.400 "copy": false, 00:07:57.400 "nvme_iov_md": false 00:07:57.400 }, 00:07:57.400 "memory_domains": [ 00:07:57.400 { 00:07:57.400 "dma_device_id": "system", 00:07:57.400 "dma_device_type": 1 00:07:57.400 }, 00:07:57.400 { 00:07:57.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.400 "dma_device_type": 2 00:07:57.400 }, 00:07:57.400 { 00:07:57.400 "dma_device_id": "system", 00:07:57.400 "dma_device_type": 1 00:07:57.400 }, 00:07:57.400 { 00:07:57.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.400 "dma_device_type": 2 00:07:57.400 } 00:07:57.400 ], 00:07:57.400 "driver_specific": { 00:07:57.400 "raid": { 00:07:57.400 "uuid": "97c3395a-c701-4210-8b09-a8fd20516d38", 00:07:57.400 "strip_size_kb": 64, 00:07:57.400 "state": "online", 00:07:57.400 "raid_level": "raid0", 00:07:57.400 "superblock": true, 00:07:57.400 "num_base_bdevs": 2, 00:07:57.400 "num_base_bdevs_discovered": 2, 00:07:57.400 "num_base_bdevs_operational": 2, 00:07:57.400 "base_bdevs_list": [ 00:07:57.400 { 00:07:57.400 "name": "pt1", 00:07:57.400 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.400 "is_configured": true, 00:07:57.400 "data_offset": 2048, 00:07:57.400 "data_size": 63488 00:07:57.400 }, 00:07:57.400 { 00:07:57.400 "name": "pt2", 00:07:57.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.400 "is_configured": true, 00:07:57.400 "data_offset": 2048, 00:07:57.400 "data_size": 63488 00:07:57.400 } 00:07:57.400 ] 00:07:57.400 } 00:07:57.400 } 00:07:57.400 }' 00:07:57.400 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:57.400 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:57.400 pt2' 00:07:57.400 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.660 15:34:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.660 [2024-11-25 15:34:56.203741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 97c3395a-c701-4210-8b09-a8fd20516d38 '!=' 97c3395a-c701-4210-8b09-a8fd20516d38 ']' 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61021 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61021 ']' 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61021 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61021 00:07:57.660 killing process with pid 61021 
00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61021' 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61021 00:07:57.660 [2024-11-25 15:34:56.289592] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.660 [2024-11-25 15:34:56.289676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.660 [2024-11-25 15:34:56.289724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.660 [2024-11-25 15:34:56.289735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:57.660 15:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61021 00:07:57.919 [2024-11-25 15:34:56.491428] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.299 15:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:59.299 00:07:59.299 real 0m4.267s 00:07:59.299 user 0m6.002s 00:07:59.299 sys 0m0.676s 00:07:59.299 15:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.299 ************************************ 00:07:59.299 END TEST raid_superblock_test 00:07:59.299 ************************************ 00:07:59.299 15:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.299 15:34:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:59.299 15:34:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:59.299 15:34:57 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.299 15:34:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.299 ************************************ 00:07:59.299 START TEST raid_read_error_test 00:07:59.299 ************************************ 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:59.299 15:34:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xi0qHfyrjM 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61227 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61227 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61227 ']' 00:07:59.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.299 15:34:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.299 [2024-11-25 15:34:57.716176] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:07:59.299 [2024-11-25 15:34:57.716318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61227 ] 00:07:59.299 [2024-11-25 15:34:57.890455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.559 [2024-11-25 15:34:58.002982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.559 [2024-11-25 15:34:58.200341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.559 [2024-11-25 15:34:58.200368] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.135 BaseBdev1_malloc 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.135 true 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.135 [2024-11-25 15:34:58.592356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:00.135 [2024-11-25 15:34:58.592477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.135 [2024-11-25 15:34:58.592513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:00.135 [2024-11-25 15:34:58.592544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.135 [2024-11-25 15:34:58.594625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.135 [2024-11-25 15:34:58.594715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:00.135 BaseBdev1 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:00.135 BaseBdev2_malloc 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.135 true 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.135 [2024-11-25 15:34:58.657836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:00.135 [2024-11-25 15:34:58.657889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.135 [2024-11-25 15:34:58.657921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:00.135 [2024-11-25 15:34:58.657931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.135 [2024-11-25 15:34:58.659931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.135 [2024-11-25 15:34:58.660034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:00.135 BaseBdev2 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:00.135 15:34:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.135 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.136 [2024-11-25 15:34:58.669873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.136 [2024-11-25 15:34:58.671669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.136 [2024-11-25 15:34:58.671861] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:00.136 [2024-11-25 15:34:58.671877] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:00.136 [2024-11-25 15:34:58.672105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:00.136 [2024-11-25 15:34:58.672270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:00.136 [2024-11-25 15:34:58.672282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:00.136 [2024-11-25 15:34:58.672440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.136 "name": "raid_bdev1", 00:08:00.136 "uuid": "22ba5f32-8684-4f29-be40-6724b595134d", 00:08:00.136 "strip_size_kb": 64, 00:08:00.136 "state": "online", 00:08:00.136 "raid_level": "raid0", 00:08:00.136 "superblock": true, 00:08:00.136 "num_base_bdevs": 2, 00:08:00.136 "num_base_bdevs_discovered": 2, 00:08:00.136 "num_base_bdevs_operational": 2, 00:08:00.136 "base_bdevs_list": [ 00:08:00.136 { 00:08:00.136 "name": "BaseBdev1", 00:08:00.136 "uuid": "42e556de-a48a-5089-954d-8825bd5fad75", 00:08:00.136 "is_configured": true, 00:08:00.136 "data_offset": 2048, 00:08:00.136 "data_size": 63488 00:08:00.136 }, 00:08:00.136 { 00:08:00.136 "name": "BaseBdev2", 00:08:00.136 "uuid": "b9f7caef-a8d8-5d56-83a1-5951fcd718df", 00:08:00.136 "is_configured": true, 00:08:00.136 "data_offset": 2048, 00:08:00.136 "data_size": 63488 00:08:00.136 } 00:08:00.136 ] 00:08:00.136 }' 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.136 15:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.715 15:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:00.715 15:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:00.715 [2024-11-25 15:34:59.202335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.653 "name": "raid_bdev1", 00:08:01.653 "uuid": "22ba5f32-8684-4f29-be40-6724b595134d", 00:08:01.653 "strip_size_kb": 64, 00:08:01.653 "state": "online", 00:08:01.653 "raid_level": "raid0", 00:08:01.653 "superblock": true, 00:08:01.653 "num_base_bdevs": 2, 00:08:01.653 "num_base_bdevs_discovered": 2, 00:08:01.653 "num_base_bdevs_operational": 2, 00:08:01.653 "base_bdevs_list": [ 00:08:01.653 { 00:08:01.653 "name": "BaseBdev1", 00:08:01.653 "uuid": "42e556de-a48a-5089-954d-8825bd5fad75", 00:08:01.653 "is_configured": true, 00:08:01.653 "data_offset": 2048, 00:08:01.653 "data_size": 63488 00:08:01.653 }, 00:08:01.653 { 00:08:01.653 "name": "BaseBdev2", 00:08:01.653 "uuid": "b9f7caef-a8d8-5d56-83a1-5951fcd718df", 00:08:01.653 "is_configured": true, 00:08:01.653 "data_offset": 2048, 00:08:01.653 "data_size": 63488 00:08:01.653 } 00:08:01.653 ] 00:08:01.653 }' 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.653 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.913 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:01.913 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.913 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.913 [2024-11-25 15:35:00.586051] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:01.913 [2024-11-25 15:35:00.586165] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:01.913 [2024-11-25 15:35:00.588906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.913 [2024-11-25 15:35:00.588993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.913 [2024-11-25 15:35:00.589090] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:01.913 [2024-11-25 15:35:00.589146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:01.913 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.913 { 00:08:01.913 "results": [ 00:08:01.913 { 00:08:01.913 "job": "raid_bdev1", 00:08:01.913 "core_mask": "0x1", 00:08:01.913 "workload": "randrw", 00:08:01.913 "percentage": 50, 00:08:01.913 "status": "finished", 00:08:01.913 "queue_depth": 1, 00:08:01.913 "io_size": 131072, 00:08:01.913 "runtime": 1.38485, 00:08:01.913 "iops": 16921.68826948767, 00:08:01.913 "mibps": 2115.2110336859587, 00:08:01.913 "io_failed": 1, 00:08:01.913 "io_timeout": 0, 00:08:01.913 "avg_latency_us": 82.08912619966216, 00:08:01.913 "min_latency_us": 24.482096069868994, 00:08:01.913 "max_latency_us": 1488.1537117903931 00:08:01.913 } 00:08:01.913 ], 
00:08:01.913 "core_count": 1 00:08:01.913 } 00:08:01.913 15:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61227 00:08:01.913 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61227 ']' 00:08:01.913 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61227 00:08:02.173 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:02.173 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.173 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61227 00:08:02.173 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.173 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.173 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61227' 00:08:02.173 killing process with pid 61227 00:08:02.173 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61227 00:08:02.173 [2024-11-25 15:35:00.626789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.173 15:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61227 00:08:02.173 [2024-11-25 15:35:00.758070] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.554 15:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:03.554 15:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xi0qHfyrjM 00:08:03.554 15:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:03.554 15:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:03.554 15:35:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:03.554 ************************************ 00:08:03.554 END TEST raid_read_error_test 00:08:03.554 ************************************ 00:08:03.554 15:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.554 15:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:03.554 15:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:03.554 00:08:03.554 real 0m4.286s 00:08:03.554 user 0m5.145s 00:08:03.554 sys 0m0.511s 00:08:03.554 15:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.554 15:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.554 15:35:01 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:03.554 15:35:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:03.554 15:35:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.554 15:35:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.554 ************************************ 00:08:03.554 START TEST raid_write_error_test 00:08:03.554 ************************************ 00:08:03.554 15:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:08:03.554 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:03.554 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:03.554 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:03.554 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:03.554 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.554 15:35:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:03.554 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:03.554 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.E3Bnye8fQ3 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61373 00:08:03.555 15:35:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61373 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61373 ']' 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.555 15:35:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.555 [2024-11-25 15:35:02.068791] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:08:03.555 [2024-11-25 15:35:02.068997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61373 ] 00:08:03.814 [2024-11-25 15:35:02.238301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.814 [2024-11-25 15:35:02.344956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.074 [2024-11-25 15:35:02.538592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.074 [2024-11-25 15:35:02.538710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.334 BaseBdev1_malloc 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.334 true 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.334 [2024-11-25 15:35:02.958134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:04.334 [2024-11-25 15:35:02.958188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.334 [2024-11-25 15:35:02.958206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:04.334 [2024-11-25 15:35:02.958229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.334 [2024-11-25 15:35:02.960275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.334 [2024-11-25 15:35:02.960324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:04.334 BaseBdev1 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.334 15:35:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.335 BaseBdev2_malloc 00:08:04.335 15:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.335 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:04.335 15:35:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.335 15:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.594 true 00:08:04.594 15:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.594 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:04.594 15:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.594 15:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.594 [2024-11-25 15:35:03.024934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:04.594 [2024-11-25 15:35:03.024988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.594 [2024-11-25 15:35:03.025003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:04.594 [2024-11-25 15:35:03.025023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.594 [2024-11-25 15:35:03.026989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.594 [2024-11-25 15:35:03.027047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:04.594 BaseBdev2 00:08:04.594 15:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.594 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:04.594 15:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.594 15:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.594 [2024-11-25 15:35:03.036968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:04.595 [2024-11-25 15:35:03.038744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.595 [2024-11-25 15:35:03.038920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:04.595 [2024-11-25 15:35:03.038936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:04.595 [2024-11-25 15:35:03.039171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:04.595 [2024-11-25 15:35:03.039354] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:04.595 [2024-11-25 15:35:03.039372] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:04.595 [2024-11-25 15:35:03.039545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.595 "name": "raid_bdev1", 00:08:04.595 "uuid": "b9df624a-26c3-4452-9bbf-9d045f6190ac", 00:08:04.595 "strip_size_kb": 64, 00:08:04.595 "state": "online", 00:08:04.595 "raid_level": "raid0", 00:08:04.595 "superblock": true, 00:08:04.595 "num_base_bdevs": 2, 00:08:04.595 "num_base_bdevs_discovered": 2, 00:08:04.595 "num_base_bdevs_operational": 2, 00:08:04.595 "base_bdevs_list": [ 00:08:04.595 { 00:08:04.595 "name": "BaseBdev1", 00:08:04.595 "uuid": "bca63930-367d-522c-b9d4-45aaeabbba18", 00:08:04.595 "is_configured": true, 00:08:04.595 "data_offset": 2048, 00:08:04.595 "data_size": 63488 00:08:04.595 }, 00:08:04.595 { 00:08:04.595 "name": "BaseBdev2", 00:08:04.595 "uuid": "d3cd4956-f86a-5044-acfb-8f62429ba761", 00:08:04.595 "is_configured": true, 00:08:04.595 "data_offset": 2048, 00:08:04.595 "data_size": 63488 00:08:04.595 } 00:08:04.595 ] 00:08:04.595 }' 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.595 15:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.855 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:04.855 15:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:04.855 [2024-11-25 15:35:03.529241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:05.794 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:05.794 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.794 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.794 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.795 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.054 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.054 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.054 "name": "raid_bdev1", 00:08:06.054 "uuid": "b9df624a-26c3-4452-9bbf-9d045f6190ac", 00:08:06.054 "strip_size_kb": 64, 00:08:06.054 "state": "online", 00:08:06.054 "raid_level": "raid0", 00:08:06.054 "superblock": true, 00:08:06.054 "num_base_bdevs": 2, 00:08:06.054 "num_base_bdevs_discovered": 2, 00:08:06.054 "num_base_bdevs_operational": 2, 00:08:06.054 "base_bdevs_list": [ 00:08:06.054 { 00:08:06.054 "name": "BaseBdev1", 00:08:06.054 "uuid": "bca63930-367d-522c-b9d4-45aaeabbba18", 00:08:06.054 "is_configured": true, 00:08:06.054 "data_offset": 2048, 00:08:06.054 "data_size": 63488 00:08:06.054 }, 00:08:06.054 { 00:08:06.054 "name": "BaseBdev2", 00:08:06.054 "uuid": "d3cd4956-f86a-5044-acfb-8f62429ba761", 00:08:06.054 "is_configured": true, 00:08:06.054 "data_offset": 2048, 00:08:06.054 "data_size": 63488 00:08:06.054 } 00:08:06.054 ] 00:08:06.054 }' 00:08:06.054 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.054 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.313 15:35:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:06.313 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.313 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.313 [2024-11-25 15:35:04.840931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:06.313 [2024-11-25 15:35:04.840968] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.313 [2024-11-25 15:35:04.843642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.313 [2024-11-25 15:35:04.843683] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.313 [2024-11-25 15:35:04.843714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.313 [2024-11-25 15:35:04.843726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:06.313 { 00:08:06.313 "results": [ 00:08:06.313 { 00:08:06.313 "job": "raid_bdev1", 00:08:06.313 "core_mask": "0x1", 00:08:06.313 "workload": "randrw", 00:08:06.313 "percentage": 50, 00:08:06.313 "status": "finished", 00:08:06.313 "queue_depth": 1, 00:08:06.313 "io_size": 131072, 00:08:06.313 "runtime": 1.312416, 00:08:06.313 "iops": 16824.69582815205, 00:08:06.313 "mibps": 2103.086978519006, 00:08:06.313 "io_failed": 1, 00:08:06.313 "io_timeout": 0, 00:08:06.313 "avg_latency_us": 82.45037864031208, 00:08:06.313 "min_latency_us": 25.823580786026202, 00:08:06.313 "max_latency_us": 1473.844541484716 00:08:06.313 } 00:08:06.313 ], 00:08:06.313 "core_count": 1 00:08:06.313 } 00:08:06.313 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.313 15:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61373 00:08:06.313 15:35:04 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@954 -- # '[' -z 61373 ']' 00:08:06.313 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61373 00:08:06.313 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:06.313 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.313 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61373 00:08:06.313 killing process with pid 61373 00:08:06.313 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.313 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.313 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61373' 00:08:06.313 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61373 00:08:06.313 [2024-11-25 15:35:04.889235] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.313 15:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61373 00:08:06.572 [2024-11-25 15:35:05.021595] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:07.508 15:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.E3Bnye8fQ3 00:08:07.508 15:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:07.508 15:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:07.508 15:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:08:07.508 15:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:07.508 15:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:07.508 15:35:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:07.508 15:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:08:07.508 00:08:07.508 real 0m4.177s 00:08:07.508 user 0m4.947s 00:08:07.508 sys 0m0.512s 00:08:07.508 ************************************ 00:08:07.508 END TEST raid_write_error_test 00:08:07.508 ************************************ 00:08:07.508 15:35:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.508 15:35:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.768 15:35:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:07.768 15:35:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:07.768 15:35:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:07.768 15:35:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.768 15:35:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:07.768 ************************************ 00:08:07.768 START TEST raid_state_function_test 00:08:07.768 ************************************ 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:07.768 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61511 00:08:07.769 15:35:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61511' 00:08:07.769 Process raid pid: 61511 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61511 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61511 ']' 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.769 15:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.769 [2024-11-25 15:35:06.306740] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:08:07.769 [2024-11-25 15:35:06.306939] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.028 [2024-11-25 15:35:06.479451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.028 [2024-11-25 15:35:06.588466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.287 [2024-11-25 15:35:06.800589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.287 [2024-11-25 15:35:06.800680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.546 [2024-11-25 15:35:07.137356] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.546 [2024-11-25 15:35:07.137465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.546 [2024-11-25 15:35:07.137480] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.546 [2024-11-25 15:35:07.137490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.546 15:35:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.546 "name": "Existed_Raid", 00:08:08.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.546 "strip_size_kb": 64, 00:08:08.546 "state": "configuring", 00:08:08.546 
"raid_level": "concat", 00:08:08.546 "superblock": false, 00:08:08.546 "num_base_bdevs": 2, 00:08:08.546 "num_base_bdevs_discovered": 0, 00:08:08.546 "num_base_bdevs_operational": 2, 00:08:08.546 "base_bdevs_list": [ 00:08:08.546 { 00:08:08.546 "name": "BaseBdev1", 00:08:08.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.546 "is_configured": false, 00:08:08.546 "data_offset": 0, 00:08:08.546 "data_size": 0 00:08:08.546 }, 00:08:08.546 { 00:08:08.546 "name": "BaseBdev2", 00:08:08.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.546 "is_configured": false, 00:08:08.546 "data_offset": 0, 00:08:08.546 "data_size": 0 00:08:08.546 } 00:08:08.546 ] 00:08:08.546 }' 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.546 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.118 [2024-11-25 15:35:07.564564] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.118 [2024-11-25 15:35:07.564650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:09.118 [2024-11-25 15:35:07.572553] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.118 [2024-11-25 15:35:07.572633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.118 [2024-11-25 15:35:07.572677] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.118 [2024-11-25 15:35:07.572703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.118 [2024-11-25 15:35:07.616723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.118 BaseBdev1 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.118 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.118 [ 00:08:09.118 { 00:08:09.118 "name": "BaseBdev1", 00:08:09.118 "aliases": [ 00:08:09.118 "44ff7d73-0939-4d9c-84c0-973ee043d920" 00:08:09.118 ], 00:08:09.118 "product_name": "Malloc disk", 00:08:09.118 "block_size": 512, 00:08:09.118 "num_blocks": 65536, 00:08:09.119 "uuid": "44ff7d73-0939-4d9c-84c0-973ee043d920", 00:08:09.119 "assigned_rate_limits": { 00:08:09.119 "rw_ios_per_sec": 0, 00:08:09.119 "rw_mbytes_per_sec": 0, 00:08:09.119 "r_mbytes_per_sec": 0, 00:08:09.119 "w_mbytes_per_sec": 0 00:08:09.119 }, 00:08:09.119 "claimed": true, 00:08:09.119 "claim_type": "exclusive_write", 00:08:09.119 "zoned": false, 00:08:09.119 "supported_io_types": { 00:08:09.119 "read": true, 00:08:09.119 "write": true, 00:08:09.119 "unmap": true, 00:08:09.119 "flush": true, 00:08:09.119 "reset": true, 00:08:09.119 "nvme_admin": false, 00:08:09.119 "nvme_io": false, 00:08:09.119 "nvme_io_md": false, 00:08:09.119 "write_zeroes": true, 00:08:09.119 "zcopy": true, 00:08:09.119 "get_zone_info": false, 00:08:09.119 "zone_management": false, 00:08:09.119 "zone_append": false, 00:08:09.119 "compare": false, 00:08:09.119 "compare_and_write": false, 00:08:09.119 "abort": true, 00:08:09.119 "seek_hole": false, 00:08:09.119 "seek_data": false, 00:08:09.119 "copy": true, 00:08:09.119 "nvme_iov_md": 
false 00:08:09.119 }, 00:08:09.119 "memory_domains": [ 00:08:09.119 { 00:08:09.119 "dma_device_id": "system", 00:08:09.119 "dma_device_type": 1 00:08:09.119 }, 00:08:09.119 { 00:08:09.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.119 "dma_device_type": 2 00:08:09.119 } 00:08:09.119 ], 00:08:09.119 "driver_specific": {} 00:08:09.119 } 00:08:09.119 ] 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.119 
15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.119 "name": "Existed_Raid", 00:08:09.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.119 "strip_size_kb": 64, 00:08:09.119 "state": "configuring", 00:08:09.119 "raid_level": "concat", 00:08:09.119 "superblock": false, 00:08:09.119 "num_base_bdevs": 2, 00:08:09.119 "num_base_bdevs_discovered": 1, 00:08:09.119 "num_base_bdevs_operational": 2, 00:08:09.119 "base_bdevs_list": [ 00:08:09.119 { 00:08:09.119 "name": "BaseBdev1", 00:08:09.119 "uuid": "44ff7d73-0939-4d9c-84c0-973ee043d920", 00:08:09.119 "is_configured": true, 00:08:09.119 "data_offset": 0, 00:08:09.119 "data_size": 65536 00:08:09.119 }, 00:08:09.119 { 00:08:09.119 "name": "BaseBdev2", 00:08:09.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.119 "is_configured": false, 00:08:09.119 "data_offset": 0, 00:08:09.119 "data_size": 0 00:08:09.119 } 00:08:09.119 ] 00:08:09.119 }' 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.119 15:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.690 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.690 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.690 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.690 [2024-11-25 15:35:08.087947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.690 [2024-11-25 15:35:08.088061] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:09.690 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.691 [2024-11-25 15:35:08.099967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.691 [2024-11-25 15:35:08.101789] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.691 [2024-11-25 15:35:08.101835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.691 "name": "Existed_Raid", 00:08:09.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.691 "strip_size_kb": 64, 00:08:09.691 "state": "configuring", 00:08:09.691 "raid_level": "concat", 00:08:09.691 "superblock": false, 00:08:09.691 "num_base_bdevs": 2, 00:08:09.691 "num_base_bdevs_discovered": 1, 00:08:09.691 "num_base_bdevs_operational": 2, 00:08:09.691 "base_bdevs_list": [ 00:08:09.691 { 00:08:09.691 "name": "BaseBdev1", 00:08:09.691 "uuid": "44ff7d73-0939-4d9c-84c0-973ee043d920", 00:08:09.691 "is_configured": true, 00:08:09.691 "data_offset": 0, 00:08:09.691 "data_size": 65536 00:08:09.691 }, 00:08:09.691 { 00:08:09.691 "name": "BaseBdev2", 00:08:09.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.691 "is_configured": false, 00:08:09.691 "data_offset": 0, 00:08:09.691 "data_size": 0 00:08:09.691 } 
00:08:09.691 ] 00:08:09.691 }' 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.691 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.951 [2024-11-25 15:35:08.595650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.951 [2024-11-25 15:35:08.595797] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:09.951 [2024-11-25 15:35:08.595822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:09.951 [2024-11-25 15:35:08.596143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:09.951 [2024-11-25 15:35:08.596359] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:09.951 [2024-11-25 15:35:08.596413] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:09.951 [2024-11-25 15:35:08.596728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.951 BaseBdev2 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.951 15:35:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.951 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.951 [ 00:08:09.951 { 00:08:09.951 "name": "BaseBdev2", 00:08:09.951 "aliases": [ 00:08:09.951 "f0aaa486-320a-404f-946a-09efad5b2293" 00:08:09.951 ], 00:08:09.951 "product_name": "Malloc disk", 00:08:09.951 "block_size": 512, 00:08:09.951 "num_blocks": 65536, 00:08:09.951 "uuid": "f0aaa486-320a-404f-946a-09efad5b2293", 00:08:09.951 "assigned_rate_limits": { 00:08:09.951 "rw_ios_per_sec": 0, 00:08:09.951 "rw_mbytes_per_sec": 0, 00:08:09.951 "r_mbytes_per_sec": 0, 00:08:09.951 "w_mbytes_per_sec": 0 00:08:09.951 }, 00:08:09.951 "claimed": true, 00:08:09.951 "claim_type": "exclusive_write", 00:08:09.951 "zoned": false, 00:08:09.951 "supported_io_types": { 00:08:09.951 "read": true, 00:08:09.951 "write": true, 00:08:09.951 "unmap": true, 00:08:09.951 "flush": true, 00:08:09.951 "reset": true, 00:08:09.951 "nvme_admin": false, 00:08:09.951 "nvme_io": false, 00:08:09.951 "nvme_io_md": 
false, 00:08:10.211 "write_zeroes": true, 00:08:10.211 "zcopy": true, 00:08:10.211 "get_zone_info": false, 00:08:10.211 "zone_management": false, 00:08:10.211 "zone_append": false, 00:08:10.211 "compare": false, 00:08:10.211 "compare_and_write": false, 00:08:10.211 "abort": true, 00:08:10.211 "seek_hole": false, 00:08:10.211 "seek_data": false, 00:08:10.211 "copy": true, 00:08:10.211 "nvme_iov_md": false 00:08:10.211 }, 00:08:10.211 "memory_domains": [ 00:08:10.211 { 00:08:10.211 "dma_device_id": "system", 00:08:10.211 "dma_device_type": 1 00:08:10.211 }, 00:08:10.211 { 00:08:10.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.211 "dma_device_type": 2 00:08:10.211 } 00:08:10.211 ], 00:08:10.211 "driver_specific": {} 00:08:10.211 } 00:08:10.211 ] 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.211 "name": "Existed_Raid", 00:08:10.211 "uuid": "8856ed19-c572-4aaf-b9f0-d59ab9463004", 00:08:10.211 "strip_size_kb": 64, 00:08:10.211 "state": "online", 00:08:10.211 "raid_level": "concat", 00:08:10.211 "superblock": false, 00:08:10.211 "num_base_bdevs": 2, 00:08:10.211 "num_base_bdevs_discovered": 2, 00:08:10.211 "num_base_bdevs_operational": 2, 00:08:10.211 "base_bdevs_list": [ 00:08:10.211 { 00:08:10.211 "name": "BaseBdev1", 00:08:10.211 "uuid": "44ff7d73-0939-4d9c-84c0-973ee043d920", 00:08:10.211 "is_configured": true, 00:08:10.211 "data_offset": 0, 00:08:10.211 "data_size": 65536 00:08:10.211 }, 00:08:10.211 { 00:08:10.211 "name": "BaseBdev2", 00:08:10.211 "uuid": "f0aaa486-320a-404f-946a-09efad5b2293", 00:08:10.211 "is_configured": true, 00:08:10.211 "data_offset": 0, 00:08:10.211 "data_size": 65536 00:08:10.211 } 00:08:10.211 ] 00:08:10.211 }' 00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:10.211 15:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.470 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:10.470 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:10.470 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:10.470 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:10.470 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:10.470 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:10.470 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:10.470 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:10.470 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.470 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.470 [2024-11-25 15:35:09.059143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:10.470 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.470 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:10.470 "name": "Existed_Raid", 00:08:10.470 "aliases": [ 00:08:10.470 "8856ed19-c572-4aaf-b9f0-d59ab9463004" 00:08:10.470 ], 00:08:10.470 "product_name": "Raid Volume", 00:08:10.470 "block_size": 512, 00:08:10.470 "num_blocks": 131072, 00:08:10.470 "uuid": "8856ed19-c572-4aaf-b9f0-d59ab9463004", 00:08:10.470 "assigned_rate_limits": { 00:08:10.470 "rw_ios_per_sec": 0, 00:08:10.470 "rw_mbytes_per_sec": 0, 00:08:10.470 "r_mbytes_per_sec": 
0, 00:08:10.470 "w_mbytes_per_sec": 0 00:08:10.470 }, 00:08:10.470 "claimed": false, 00:08:10.470 "zoned": false, 00:08:10.470 "supported_io_types": { 00:08:10.470 "read": true, 00:08:10.470 "write": true, 00:08:10.470 "unmap": true, 00:08:10.470 "flush": true, 00:08:10.470 "reset": true, 00:08:10.470 "nvme_admin": false, 00:08:10.470 "nvme_io": false, 00:08:10.470 "nvme_io_md": false, 00:08:10.470 "write_zeroes": true, 00:08:10.470 "zcopy": false, 00:08:10.470 "get_zone_info": false, 00:08:10.470 "zone_management": false, 00:08:10.470 "zone_append": false, 00:08:10.470 "compare": false, 00:08:10.470 "compare_and_write": false, 00:08:10.470 "abort": false, 00:08:10.470 "seek_hole": false, 00:08:10.470 "seek_data": false, 00:08:10.471 "copy": false, 00:08:10.471 "nvme_iov_md": false 00:08:10.471 }, 00:08:10.471 "memory_domains": [ 00:08:10.471 { 00:08:10.471 "dma_device_id": "system", 00:08:10.471 "dma_device_type": 1 00:08:10.471 }, 00:08:10.471 { 00:08:10.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.471 "dma_device_type": 2 00:08:10.471 }, 00:08:10.471 { 00:08:10.471 "dma_device_id": "system", 00:08:10.471 "dma_device_type": 1 00:08:10.471 }, 00:08:10.471 { 00:08:10.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.471 "dma_device_type": 2 00:08:10.471 } 00:08:10.471 ], 00:08:10.471 "driver_specific": { 00:08:10.471 "raid": { 00:08:10.471 "uuid": "8856ed19-c572-4aaf-b9f0-d59ab9463004", 00:08:10.471 "strip_size_kb": 64, 00:08:10.471 "state": "online", 00:08:10.471 "raid_level": "concat", 00:08:10.471 "superblock": false, 00:08:10.471 "num_base_bdevs": 2, 00:08:10.471 "num_base_bdevs_discovered": 2, 00:08:10.471 "num_base_bdevs_operational": 2, 00:08:10.471 "base_bdevs_list": [ 00:08:10.471 { 00:08:10.471 "name": "BaseBdev1", 00:08:10.471 "uuid": "44ff7d73-0939-4d9c-84c0-973ee043d920", 00:08:10.471 "is_configured": true, 00:08:10.471 "data_offset": 0, 00:08:10.471 "data_size": 65536 00:08:10.471 }, 00:08:10.471 { 00:08:10.471 "name": "BaseBdev2", 
00:08:10.471 "uuid": "f0aaa486-320a-404f-946a-09efad5b2293", 00:08:10.471 "is_configured": true, 00:08:10.471 "data_offset": 0, 00:08:10.471 "data_size": 65536 00:08:10.471 } 00:08:10.471 ] 00:08:10.471 } 00:08:10.471 } 00:08:10.471 }' 00:08:10.471 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:10.471 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:10.471 BaseBdev2' 00:08:10.471 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.729 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.730 [2024-11-25 15:35:09.266559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:10.730 [2024-11-25 15:35:09.266634] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.730 [2024-11-25 15:35:09.266690] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.730 "name": "Existed_Raid", 00:08:10.730 "uuid": "8856ed19-c572-4aaf-b9f0-d59ab9463004", 00:08:10.730 "strip_size_kb": 64, 00:08:10.730 
"state": "offline", 00:08:10.730 "raid_level": "concat", 00:08:10.730 "superblock": false, 00:08:10.730 "num_base_bdevs": 2, 00:08:10.730 "num_base_bdevs_discovered": 1, 00:08:10.730 "num_base_bdevs_operational": 1, 00:08:10.730 "base_bdevs_list": [ 00:08:10.730 { 00:08:10.730 "name": null, 00:08:10.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.730 "is_configured": false, 00:08:10.730 "data_offset": 0, 00:08:10.730 "data_size": 65536 00:08:10.730 }, 00:08:10.730 { 00:08:10.730 "name": "BaseBdev2", 00:08:10.730 "uuid": "f0aaa486-320a-404f-946a-09efad5b2293", 00:08:10.730 "is_configured": true, 00:08:10.730 "data_offset": 0, 00:08:10.730 "data_size": 65536 00:08:10.730 } 00:08:10.730 ] 00:08:10.730 }' 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.730 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.299 [2024-11-25 15:35:09.832530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:11.299 [2024-11-25 15:35:09.832634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.299 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.558 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:11.558 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:11.558 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:11.558 15:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61511 00:08:11.558 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61511 ']' 00:08:11.558 15:35:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61511 00:08:11.558 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:11.558 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.558 15:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61511 00:08:11.558 15:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.559 15:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.559 15:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61511' 00:08:11.559 killing process with pid 61511 00:08:11.559 15:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61511 00:08:11.559 [2024-11-25 15:35:10.009427] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:11.559 15:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61511 00:08:11.559 [2024-11-25 15:35:10.025460] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:12.496 00:08:12.496 real 0m4.851s 00:08:12.496 user 0m7.046s 00:08:12.496 sys 0m0.764s 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.496 ************************************ 00:08:12.496 END TEST raid_state_function_test 00:08:12.496 ************************************ 00:08:12.496 15:35:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:12.496 15:35:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:12.496 15:35:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.496 15:35:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.496 ************************************ 00:08:12.496 START TEST raid_state_function_test_sb 00:08:12.496 ************************************ 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:12.496 Process raid pid: 61759 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61759 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61759' 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61759 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61759 ']' 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.496 15:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.756 [2024-11-25 15:35:11.231759] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:08:12.756 [2024-11-25 15:35:11.231875] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.756 [2024-11-25 15:35:11.404649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.017 [2024-11-25 15:35:11.510097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.277 [2024-11-25 15:35:11.710468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.277 [2024-11-25 15:35:11.710517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:13.536 [2024-11-25 15:35:12.052934] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.536 [2024-11-25 15:35:12.052987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.536 [2024-11-25 15:35:12.052997] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:13.536 [2024-11-25 15:35:12.053021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.536 15:35:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.536 "name": "Existed_Raid", 00:08:13.536 "uuid": "5a83d8a3-5174-4911-8bfe-6dd77c9f2d94", 00:08:13.536 "strip_size_kb": 64, 00:08:13.536 "state": "configuring", 00:08:13.536 "raid_level": "concat", 00:08:13.536 "superblock": true, 00:08:13.536 "num_base_bdevs": 2, 00:08:13.536 "num_base_bdevs_discovered": 0, 00:08:13.536 "num_base_bdevs_operational": 2, 00:08:13.536 "base_bdevs_list": [ 00:08:13.536 { 00:08:13.536 "name": "BaseBdev1", 00:08:13.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.536 "is_configured": false, 00:08:13.536 "data_offset": 0, 00:08:13.536 "data_size": 0 00:08:13.536 }, 00:08:13.536 { 00:08:13.536 "name": "BaseBdev2", 00:08:13.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.536 "is_configured": false, 00:08:13.536 "data_offset": 0, 00:08:13.536 "data_size": 0 00:08:13.536 } 00:08:13.536 ] 00:08:13.536 }' 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.536 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.796 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:13.796 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.796 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.796 
[2024-11-25 15:35:12.472148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:13.796 [2024-11-25 15:35:12.472230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.057 [2024-11-25 15:35:12.484153] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.057 [2024-11-25 15:35:12.484195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.057 [2024-11-25 15:35:12.484204] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.057 [2024-11-25 15:35:12.484216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.057 [2024-11-25 15:35:12.529984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.057 BaseBdev1 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.057 [ 00:08:14.057 { 00:08:14.057 "name": "BaseBdev1", 00:08:14.057 "aliases": [ 00:08:14.057 "15508450-5642-42f6-989e-e9baa16a82ed" 00:08:14.057 ], 00:08:14.057 "product_name": "Malloc disk", 00:08:14.057 "block_size": 512, 00:08:14.057 "num_blocks": 65536, 00:08:14.057 "uuid": "15508450-5642-42f6-989e-e9baa16a82ed", 00:08:14.057 "assigned_rate_limits": { 00:08:14.057 "rw_ios_per_sec": 0, 00:08:14.057 "rw_mbytes_per_sec": 0, 
00:08:14.057 "r_mbytes_per_sec": 0, 00:08:14.057 "w_mbytes_per_sec": 0 00:08:14.057 }, 00:08:14.057 "claimed": true, 00:08:14.057 "claim_type": "exclusive_write", 00:08:14.057 "zoned": false, 00:08:14.057 "supported_io_types": { 00:08:14.057 "read": true, 00:08:14.057 "write": true, 00:08:14.057 "unmap": true, 00:08:14.057 "flush": true, 00:08:14.057 "reset": true, 00:08:14.057 "nvme_admin": false, 00:08:14.057 "nvme_io": false, 00:08:14.057 "nvme_io_md": false, 00:08:14.057 "write_zeroes": true, 00:08:14.057 "zcopy": true, 00:08:14.057 "get_zone_info": false, 00:08:14.057 "zone_management": false, 00:08:14.057 "zone_append": false, 00:08:14.057 "compare": false, 00:08:14.057 "compare_and_write": false, 00:08:14.057 "abort": true, 00:08:14.057 "seek_hole": false, 00:08:14.057 "seek_data": false, 00:08:14.057 "copy": true, 00:08:14.057 "nvme_iov_md": false 00:08:14.057 }, 00:08:14.057 "memory_domains": [ 00:08:14.057 { 00:08:14.057 "dma_device_id": "system", 00:08:14.057 "dma_device_type": 1 00:08:14.057 }, 00:08:14.057 { 00:08:14.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.057 "dma_device_type": 2 00:08:14.057 } 00:08:14.057 ], 00:08:14.057 "driver_specific": {} 00:08:14.057 } 00:08:14.057 ] 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.057 15:35:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.057 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.058 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.058 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.058 "name": "Existed_Raid", 00:08:14.058 "uuid": "e27ce974-59cb-457c-84a3-3e4c6ecfdfa4", 00:08:14.058 "strip_size_kb": 64, 00:08:14.058 "state": "configuring", 00:08:14.058 "raid_level": "concat", 00:08:14.058 "superblock": true, 00:08:14.058 "num_base_bdevs": 2, 00:08:14.058 "num_base_bdevs_discovered": 1, 00:08:14.058 "num_base_bdevs_operational": 2, 00:08:14.058 "base_bdevs_list": [ 00:08:14.058 { 00:08:14.058 "name": "BaseBdev1", 00:08:14.058 "uuid": "15508450-5642-42f6-989e-e9baa16a82ed", 00:08:14.058 "is_configured": true, 00:08:14.058 "data_offset": 2048, 00:08:14.058 "data_size": 63488 00:08:14.058 }, 00:08:14.058 { 
00:08:14.058 "name": "BaseBdev2", 00:08:14.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.058 "is_configured": false, 00:08:14.058 "data_offset": 0, 00:08:14.058 "data_size": 0 00:08:14.058 } 00:08:14.058 ] 00:08:14.058 }' 00:08:14.058 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.058 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.318 [2024-11-25 15:35:12.961278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.318 [2024-11-25 15:35:12.961402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.318 [2024-11-25 15:35:12.973296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.318 [2024-11-25 15:35:12.975063] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.318 [2024-11-25 15:35:12.975148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.318 15:35:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.318 15:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.578 15:35:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.578 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.579 "name": "Existed_Raid", 00:08:14.579 "uuid": "0f9e78ee-ce28-4fd2-8ce5-cb1a5bbad019", 00:08:14.579 "strip_size_kb": 64, 00:08:14.579 "state": "configuring", 00:08:14.579 "raid_level": "concat", 00:08:14.579 "superblock": true, 00:08:14.579 "num_base_bdevs": 2, 00:08:14.579 "num_base_bdevs_discovered": 1, 00:08:14.579 "num_base_bdevs_operational": 2, 00:08:14.579 "base_bdevs_list": [ 00:08:14.579 { 00:08:14.579 "name": "BaseBdev1", 00:08:14.579 "uuid": "15508450-5642-42f6-989e-e9baa16a82ed", 00:08:14.579 "is_configured": true, 00:08:14.579 "data_offset": 2048, 00:08:14.579 "data_size": 63488 00:08:14.579 }, 00:08:14.579 { 00:08:14.579 "name": "BaseBdev2", 00:08:14.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.579 "is_configured": false, 00:08:14.579 "data_offset": 0, 00:08:14.579 "data_size": 0 00:08:14.579 } 00:08:14.579 ] 00:08:14.579 }' 00:08:14.579 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.579 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.839 [2024-11-25 15:35:13.447513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:14.839 [2024-11-25 15:35:13.447829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:14.839 [2024-11-25 15:35:13.447890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 
126976, blocklen 512 00:08:14.839 [2024-11-25 15:35:13.448192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:14.839 [2024-11-25 15:35:13.448386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:14.839 [2024-11-25 15:35:13.448437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:14.839 BaseBdev2 00:08:14.839 [2024-11-25 15:35:13.448638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:14.839 15:35:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.839 [ 00:08:14.839 { 00:08:14.839 "name": "BaseBdev2", 00:08:14.839 "aliases": [ 00:08:14.839 "d663771a-51e2-4208-9ba0-3bf9922c00ab" 00:08:14.839 ], 00:08:14.839 "product_name": "Malloc disk", 00:08:14.839 "block_size": 512, 00:08:14.839 "num_blocks": 65536, 00:08:14.839 "uuid": "d663771a-51e2-4208-9ba0-3bf9922c00ab", 00:08:14.839 "assigned_rate_limits": { 00:08:14.839 "rw_ios_per_sec": 0, 00:08:14.839 "rw_mbytes_per_sec": 0, 00:08:14.839 "r_mbytes_per_sec": 0, 00:08:14.839 "w_mbytes_per_sec": 0 00:08:14.839 }, 00:08:14.839 "claimed": true, 00:08:14.839 "claim_type": "exclusive_write", 00:08:14.839 "zoned": false, 00:08:14.839 "supported_io_types": { 00:08:14.839 "read": true, 00:08:14.839 "write": true, 00:08:14.839 "unmap": true, 00:08:14.839 "flush": true, 00:08:14.839 "reset": true, 00:08:14.839 "nvme_admin": false, 00:08:14.839 "nvme_io": false, 00:08:14.839 "nvme_io_md": false, 00:08:14.839 "write_zeroes": true, 00:08:14.839 "zcopy": true, 00:08:14.839 "get_zone_info": false, 00:08:14.839 "zone_management": false, 00:08:14.839 "zone_append": false, 00:08:14.839 "compare": false, 00:08:14.839 "compare_and_write": false, 00:08:14.839 "abort": true, 00:08:14.839 "seek_hole": false, 00:08:14.839 "seek_data": false, 00:08:14.839 "copy": true, 00:08:14.839 "nvme_iov_md": false 00:08:14.839 }, 00:08:14.839 "memory_domains": [ 00:08:14.839 { 00:08:14.839 "dma_device_id": "system", 00:08:14.839 "dma_device_type": 1 00:08:14.839 }, 00:08:14.839 { 00:08:14.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.839 "dma_device_type": 2 00:08:14.839 } 00:08:14.839 ], 00:08:14.839 "driver_specific": {} 00:08:14.839 } 00:08:14.839 ] 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.839 15:35:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.839 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.840 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.840 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.840 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.840 15:35:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.099 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.099 "name": "Existed_Raid", 00:08:15.099 "uuid": "0f9e78ee-ce28-4fd2-8ce5-cb1a5bbad019", 00:08:15.099 "strip_size_kb": 64, 00:08:15.099 "state": "online", 00:08:15.099 "raid_level": "concat", 00:08:15.099 "superblock": true, 00:08:15.099 "num_base_bdevs": 2, 00:08:15.099 "num_base_bdevs_discovered": 2, 00:08:15.099 "num_base_bdevs_operational": 2, 00:08:15.099 "base_bdevs_list": [ 00:08:15.100 { 00:08:15.100 "name": "BaseBdev1", 00:08:15.100 "uuid": "15508450-5642-42f6-989e-e9baa16a82ed", 00:08:15.100 "is_configured": true, 00:08:15.100 "data_offset": 2048, 00:08:15.100 "data_size": 63488 00:08:15.100 }, 00:08:15.100 { 00:08:15.100 "name": "BaseBdev2", 00:08:15.100 "uuid": "d663771a-51e2-4208-9ba0-3bf9922c00ab", 00:08:15.100 "is_configured": true, 00:08:15.100 "data_offset": 2048, 00:08:15.100 "data_size": 63488 00:08:15.100 } 00:08:15.100 ] 00:08:15.100 }' 00:08:15.100 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.100 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.360 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:15.360 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:15.360 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:15.360 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:15.360 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:15.360 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 
00:08:15.360 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:15.360 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.360 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.360 15:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:15.360 [2024-11-25 15:35:13.966967] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.360 15:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.360 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:15.360 "name": "Existed_Raid", 00:08:15.360 "aliases": [ 00:08:15.360 "0f9e78ee-ce28-4fd2-8ce5-cb1a5bbad019" 00:08:15.360 ], 00:08:15.360 "product_name": "Raid Volume", 00:08:15.360 "block_size": 512, 00:08:15.360 "num_blocks": 126976, 00:08:15.360 "uuid": "0f9e78ee-ce28-4fd2-8ce5-cb1a5bbad019", 00:08:15.360 "assigned_rate_limits": { 00:08:15.360 "rw_ios_per_sec": 0, 00:08:15.360 "rw_mbytes_per_sec": 0, 00:08:15.360 "r_mbytes_per_sec": 0, 00:08:15.360 "w_mbytes_per_sec": 0 00:08:15.360 }, 00:08:15.360 "claimed": false, 00:08:15.360 "zoned": false, 00:08:15.360 "supported_io_types": { 00:08:15.360 "read": true, 00:08:15.360 "write": true, 00:08:15.360 "unmap": true, 00:08:15.360 "flush": true, 00:08:15.360 "reset": true, 00:08:15.360 "nvme_admin": false, 00:08:15.360 "nvme_io": false, 00:08:15.360 "nvme_io_md": false, 00:08:15.360 "write_zeroes": true, 00:08:15.360 "zcopy": false, 00:08:15.360 "get_zone_info": false, 00:08:15.360 "zone_management": false, 00:08:15.360 "zone_append": false, 00:08:15.360 "compare": false, 00:08:15.360 "compare_and_write": false, 00:08:15.360 "abort": false, 00:08:15.360 "seek_hole": false, 00:08:15.360 "seek_data": false, 00:08:15.360 "copy": false, 
00:08:15.360 "nvme_iov_md": false 00:08:15.360 }, 00:08:15.360 "memory_domains": [ 00:08:15.360 { 00:08:15.360 "dma_device_id": "system", 00:08:15.360 "dma_device_type": 1 00:08:15.360 }, 00:08:15.360 { 00:08:15.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.360 "dma_device_type": 2 00:08:15.360 }, 00:08:15.360 { 00:08:15.360 "dma_device_id": "system", 00:08:15.360 "dma_device_type": 1 00:08:15.360 }, 00:08:15.360 { 00:08:15.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.360 "dma_device_type": 2 00:08:15.360 } 00:08:15.360 ], 00:08:15.360 "driver_specific": { 00:08:15.360 "raid": { 00:08:15.360 "uuid": "0f9e78ee-ce28-4fd2-8ce5-cb1a5bbad019", 00:08:15.360 "strip_size_kb": 64, 00:08:15.360 "state": "online", 00:08:15.360 "raid_level": "concat", 00:08:15.360 "superblock": true, 00:08:15.360 "num_base_bdevs": 2, 00:08:15.360 "num_base_bdevs_discovered": 2, 00:08:15.360 "num_base_bdevs_operational": 2, 00:08:15.360 "base_bdevs_list": [ 00:08:15.360 { 00:08:15.360 "name": "BaseBdev1", 00:08:15.360 "uuid": "15508450-5642-42f6-989e-e9baa16a82ed", 00:08:15.360 "is_configured": true, 00:08:15.360 "data_offset": 2048, 00:08:15.360 "data_size": 63488 00:08:15.360 }, 00:08:15.360 { 00:08:15.360 "name": "BaseBdev2", 00:08:15.360 "uuid": "d663771a-51e2-4208-9ba0-3bf9922c00ab", 00:08:15.360 "is_configured": true, 00:08:15.360 "data_offset": 2048, 00:08:15.360 "data_size": 63488 00:08:15.360 } 00:08:15.360 ] 00:08:15.360 } 00:08:15.360 } 00:08:15.360 }' 00:08:15.360 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:15.620 BaseBdev2' 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.620 15:35:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.620 [2024-11-25 15:35:14.186388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:15.620 [2024-11-25 15:35:14.186530] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.620 [2024-11-25 15:35:14.186595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.620 
15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.620 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.621 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.621 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.880 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.880 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.880 "name": "Existed_Raid", 00:08:15.880 "uuid": "0f9e78ee-ce28-4fd2-8ce5-cb1a5bbad019", 00:08:15.880 "strip_size_kb": 64, 00:08:15.880 "state": "offline", 00:08:15.880 "raid_level": "concat", 00:08:15.880 "superblock": true, 00:08:15.880 "num_base_bdevs": 2, 00:08:15.880 "num_base_bdevs_discovered": 1, 00:08:15.880 "num_base_bdevs_operational": 1, 00:08:15.880 "base_bdevs_list": [ 00:08:15.880 { 00:08:15.880 "name": null, 00:08:15.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.880 "is_configured": false, 00:08:15.880 "data_offset": 0, 00:08:15.880 "data_size": 63488 00:08:15.880 }, 00:08:15.880 { 00:08:15.880 "name": "BaseBdev2", 00:08:15.880 "uuid": "d663771a-51e2-4208-9ba0-3bf9922c00ab", 00:08:15.880 
"is_configured": true, 00:08:15.880 "data_offset": 2048, 00:08:15.880 "data_size": 63488 00:08:15.880 } 00:08:15.880 ] 00:08:15.880 }' 00:08:15.880 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.880 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.141 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:16.141 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:16.141 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.141 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.141 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:16.141 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.141 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.141 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:16.141 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:16.141 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:16.141 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.141 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.141 [2024-11-25 15:35:14.778079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:16.141 [2024-11-25 15:35:14.778201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:16.401 15:35:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61759 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61759 ']' 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61759 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61759 00:08:16.401 killing process with pid 61759 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61759' 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61759 00:08:16.401 [2024-11-25 15:35:14.959923] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.401 15:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61759 00:08:16.401 [2024-11-25 15:35:14.976331] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.362 15:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:17.362 ************************************ 00:08:17.362 END TEST raid_state_function_test_sb 00:08:17.362 ************************************ 00:08:17.362 00:08:17.362 real 0m4.877s 00:08:17.362 user 0m7.095s 00:08:17.362 sys 0m0.753s 00:08:17.362 15:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.362 15:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.620 15:35:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:17.620 15:35:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:17.620 15:35:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.620 15:35:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.620 ************************************ 00:08:17.620 START TEST raid_superblock_test 00:08:17.620 ************************************ 00:08:17.620 15:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62005 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62005 00:08:17.621 
15:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62005 ']' 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.621 15:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.621 [2024-11-25 15:35:16.176805] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:08:17.621 [2024-11-25 15:35:16.177016] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62005 ] 00:08:17.879 [2024-11-25 15:35:16.351308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.879 [2024-11-25 15:35:16.460568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.139 [2024-11-25 15:35:16.648479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.139 [2024-11-25 15:35:16.648592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.398 15:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.398 15:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:18.398 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 
00:08:18.398 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.398 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:18.398 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:18.398 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:18.398 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:18.398 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:18.398 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:18.398 15:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:18.398 15:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.398 15:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.398 malloc1 00:08:18.398 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.398 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:18.398 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.398 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.398 [2024-11-25 15:35:17.047477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:18.398 [2024-11-25 15:35:17.047623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.398 [2024-11-25 15:35:17.047663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:08:18.398 [2024-11-25 15:35:17.047700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.398 [2024-11-25 15:35:17.049755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.398 [2024-11-25 15:35:17.049836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:18.398 pt1 00:08:18.398 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.398 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:18.398 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.398 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:18.398 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:18.398 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:18.398 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:18.398 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:18.398 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:18.399 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:18.399 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.399 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.658 malloc2 00:08:18.658 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.658 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:08:18.658 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.658 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.658 [2024-11-25 15:35:17.104969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:18.658 [2024-11-25 15:35:17.105040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.658 [2024-11-25 15:35:17.105079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:18.658 [2024-11-25 15:35:17.105088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.659 [2024-11-25 15:35:17.107130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.659 [2024-11-25 15:35:17.107164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:18.659 pt2 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.659 [2024-11-25 15:35:17.117010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:18.659 [2024-11-25 15:35:17.118737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:18.659 [2024-11-25 15:35:17.118894] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:08:18.659 [2024-11-25 15:35:17.118907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:18.659 [2024-11-25 15:35:17.119142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:18.659 [2024-11-25 15:35:17.119292] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:18.659 [2024-11-25 15:35:17.119303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:18.659 [2024-11-25 15:35:17.119440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.659 "name": "raid_bdev1", 00:08:18.659 "uuid": "25892654-6240-4818-ba19-dc3b634f96b0", 00:08:18.659 "strip_size_kb": 64, 00:08:18.659 "state": "online", 00:08:18.659 "raid_level": "concat", 00:08:18.659 "superblock": true, 00:08:18.659 "num_base_bdevs": 2, 00:08:18.659 "num_base_bdevs_discovered": 2, 00:08:18.659 "num_base_bdevs_operational": 2, 00:08:18.659 "base_bdevs_list": [ 00:08:18.659 { 00:08:18.659 "name": "pt1", 00:08:18.659 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:18.659 "is_configured": true, 00:08:18.659 "data_offset": 2048, 00:08:18.659 "data_size": 63488 00:08:18.659 }, 00:08:18.659 { 00:08:18.659 "name": "pt2", 00:08:18.659 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:18.659 "is_configured": true, 00:08:18.659 "data_offset": 2048, 00:08:18.659 "data_size": 63488 00:08:18.659 } 00:08:18.659 ] 00:08:18.659 }' 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.659 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.919 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:18.919 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:18.919 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.919 15:35:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:18.919 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.919 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.919 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.919 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:18.919 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.919 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.919 [2024-11-25 15:35:17.572457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.919 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.919 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:18.919 "name": "raid_bdev1", 00:08:18.919 "aliases": [ 00:08:18.919 "25892654-6240-4818-ba19-dc3b634f96b0" 00:08:18.919 ], 00:08:18.919 "product_name": "Raid Volume", 00:08:18.919 "block_size": 512, 00:08:18.919 "num_blocks": 126976, 00:08:18.919 "uuid": "25892654-6240-4818-ba19-dc3b634f96b0", 00:08:18.919 "assigned_rate_limits": { 00:08:18.919 "rw_ios_per_sec": 0, 00:08:18.919 "rw_mbytes_per_sec": 0, 00:08:18.919 "r_mbytes_per_sec": 0, 00:08:18.919 "w_mbytes_per_sec": 0 00:08:18.919 }, 00:08:18.919 "claimed": false, 00:08:18.919 "zoned": false, 00:08:18.919 "supported_io_types": { 00:08:18.919 "read": true, 00:08:18.919 "write": true, 00:08:18.919 "unmap": true, 00:08:18.919 "flush": true, 00:08:18.919 "reset": true, 00:08:18.919 "nvme_admin": false, 00:08:18.919 "nvme_io": false, 00:08:18.919 "nvme_io_md": false, 00:08:18.919 "write_zeroes": true, 00:08:18.919 "zcopy": false, 00:08:18.919 "get_zone_info": false, 00:08:18.919 "zone_management": false, 00:08:18.919 
"zone_append": false, 00:08:18.919 "compare": false, 00:08:18.919 "compare_and_write": false, 00:08:18.919 "abort": false, 00:08:18.919 "seek_hole": false, 00:08:18.919 "seek_data": false, 00:08:18.919 "copy": false, 00:08:18.919 "nvme_iov_md": false 00:08:18.919 }, 00:08:18.919 "memory_domains": [ 00:08:18.919 { 00:08:18.919 "dma_device_id": "system", 00:08:18.919 "dma_device_type": 1 00:08:18.919 }, 00:08:18.919 { 00:08:18.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.919 "dma_device_type": 2 00:08:18.919 }, 00:08:18.919 { 00:08:18.919 "dma_device_id": "system", 00:08:18.919 "dma_device_type": 1 00:08:18.919 }, 00:08:18.920 { 00:08:18.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.920 "dma_device_type": 2 00:08:18.920 } 00:08:18.920 ], 00:08:18.920 "driver_specific": { 00:08:18.920 "raid": { 00:08:18.920 "uuid": "25892654-6240-4818-ba19-dc3b634f96b0", 00:08:18.920 "strip_size_kb": 64, 00:08:18.920 "state": "online", 00:08:18.920 "raid_level": "concat", 00:08:18.920 "superblock": true, 00:08:18.920 "num_base_bdevs": 2, 00:08:18.920 "num_base_bdevs_discovered": 2, 00:08:18.920 "num_base_bdevs_operational": 2, 00:08:18.920 "base_bdevs_list": [ 00:08:18.920 { 00:08:18.920 "name": "pt1", 00:08:18.920 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:18.920 "is_configured": true, 00:08:18.920 "data_offset": 2048, 00:08:18.920 "data_size": 63488 00:08:18.920 }, 00:08:18.920 { 00:08:18.920 "name": "pt2", 00:08:18.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:18.920 "is_configured": true, 00:08:18.920 "data_offset": 2048, 00:08:18.920 "data_size": 63488 00:08:18.920 } 00:08:18.920 ] 00:08:18.920 } 00:08:18.920 } 00:08:18.920 }' 00:08:18.920 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:19.180 pt2' 00:08:19.180 15:35:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:19.180 [2024-11-25 15:35:17.784072] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=25892654-6240-4818-ba19-dc3b634f96b0 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 25892654-6240-4818-ba19-dc3b634f96b0 ']' 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.180 [2024-11-25 15:35:17.827705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.180 [2024-11-25 15:35:17.827768] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.180 [2024-11-25 15:35:17.827859] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.180 [2024-11-25 15:35:17.827934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.180 [2024-11-25 15:35:17.828018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:19.180 15:35:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.180 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:19.441 15:35:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.441 [2024-11-25 15:35:17.943531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:19.441 [2024-11-25 15:35:17.945362] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:19.441 [2024-11-25 15:35:17.945421] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:19.441 [2024-11-25 15:35:17.945471] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:19.441 [2024-11-25 15:35:17.945486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.441 [2024-11-25 15:35:17.945495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:19.441 request: 00:08:19.441 { 00:08:19.441 "name": "raid_bdev1", 00:08:19.441 "raid_level": "concat", 00:08:19.441 "base_bdevs": [ 00:08:19.441 "malloc1", 00:08:19.441 "malloc2" 00:08:19.441 ], 00:08:19.441 "strip_size_kb": 64, 00:08:19.441 "superblock": false, 00:08:19.441 "method": "bdev_raid_create", 00:08:19.441 "req_id": 1 00:08:19.441 } 00:08:19.441 Got JSON-RPC error response 00:08:19.441 response: 00:08:19.441 { 00:08:19.441 "code": -17, 00:08:19.441 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:19.441 } 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:19.441 15:35:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.441 15:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.441 [2024-11-25 15:35:18.007402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:19.441 [2024-11-25 15:35:18.007494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.441 [2024-11-25 15:35:18.007546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:19.441 [2024-11-25 15:35:18.007577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.441 [2024-11-25 15:35:18.009717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.442 [2024-11-25 15:35:18.009789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:19.442 [2024-11-25 15:35:18.009879] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:19.442 [2024-11-25 15:35:18.009984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:19.442 pt1 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.442 "name": "raid_bdev1", 00:08:19.442 "uuid": "25892654-6240-4818-ba19-dc3b634f96b0", 00:08:19.442 "strip_size_kb": 64, 00:08:19.442 "state": "configuring", 00:08:19.442 "raid_level": "concat", 00:08:19.442 "superblock": true, 00:08:19.442 "num_base_bdevs": 2, 00:08:19.442 
"num_base_bdevs_discovered": 1, 00:08:19.442 "num_base_bdevs_operational": 2, 00:08:19.442 "base_bdevs_list": [ 00:08:19.442 { 00:08:19.442 "name": "pt1", 00:08:19.442 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.442 "is_configured": true, 00:08:19.442 "data_offset": 2048, 00:08:19.442 "data_size": 63488 00:08:19.442 }, 00:08:19.442 { 00:08:19.442 "name": null, 00:08:19.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.442 "is_configured": false, 00:08:19.442 "data_offset": 2048, 00:08:19.442 "data_size": 63488 00:08:19.442 } 00:08:19.442 ] 00:08:19.442 }' 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.442 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.011 [2024-11-25 15:35:18.398760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:20.011 [2024-11-25 15:35:18.398880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.011 [2024-11-25 15:35:18.398926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:20.011 [2024-11-25 15:35:18.398942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.011 [2024-11-25 15:35:18.399419] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.011 [2024-11-25 15:35:18.399442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:20.011 [2024-11-25 15:35:18.399521] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:20.011 [2024-11-25 15:35:18.399545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:20.011 [2024-11-25 15:35:18.399649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:20.011 [2024-11-25 15:35:18.399660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:20.011 [2024-11-25 15:35:18.399893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:20.011 [2024-11-25 15:35:18.400068] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:20.011 [2024-11-25 15:35:18.400079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:20.011 [2024-11-25 15:35:18.400221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.011 pt2 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.011 "name": "raid_bdev1", 00:08:20.011 "uuid": "25892654-6240-4818-ba19-dc3b634f96b0", 00:08:20.011 "strip_size_kb": 64, 00:08:20.011 "state": "online", 00:08:20.011 "raid_level": "concat", 00:08:20.011 "superblock": true, 00:08:20.011 "num_base_bdevs": 2, 00:08:20.011 "num_base_bdevs_discovered": 2, 00:08:20.011 "num_base_bdevs_operational": 2, 00:08:20.011 "base_bdevs_list": [ 00:08:20.011 { 00:08:20.011 "name": "pt1", 00:08:20.011 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.011 "is_configured": true, 00:08:20.011 "data_offset": 2048, 00:08:20.011 "data_size": 63488 00:08:20.011 }, 00:08:20.011 { 00:08:20.011 "name": "pt2", 00:08:20.011 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:20.011 "is_configured": true, 00:08:20.011 "data_offset": 2048, 00:08:20.011 "data_size": 63488 00:08:20.011 } 00:08:20.011 ] 00:08:20.011 }' 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.011 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.271 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:20.271 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:20.271 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:20.271 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:20.271 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:20.271 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:20.271 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.271 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.271 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.271 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:20.271 [2024-11-25 15:35:18.854237] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.271 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.271 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:20.271 "name": "raid_bdev1", 00:08:20.271 "aliases": [ 00:08:20.271 "25892654-6240-4818-ba19-dc3b634f96b0" 00:08:20.271 ], 00:08:20.271 "product_name": "Raid Volume", 00:08:20.271 "block_size": 512, 00:08:20.271 
"num_blocks": 126976, 00:08:20.271 "uuid": "25892654-6240-4818-ba19-dc3b634f96b0", 00:08:20.271 "assigned_rate_limits": { 00:08:20.271 "rw_ios_per_sec": 0, 00:08:20.271 "rw_mbytes_per_sec": 0, 00:08:20.271 "r_mbytes_per_sec": 0, 00:08:20.271 "w_mbytes_per_sec": 0 00:08:20.271 }, 00:08:20.271 "claimed": false, 00:08:20.271 "zoned": false, 00:08:20.271 "supported_io_types": { 00:08:20.271 "read": true, 00:08:20.271 "write": true, 00:08:20.271 "unmap": true, 00:08:20.271 "flush": true, 00:08:20.271 "reset": true, 00:08:20.271 "nvme_admin": false, 00:08:20.271 "nvme_io": false, 00:08:20.271 "nvme_io_md": false, 00:08:20.271 "write_zeroes": true, 00:08:20.271 "zcopy": false, 00:08:20.271 "get_zone_info": false, 00:08:20.271 "zone_management": false, 00:08:20.271 "zone_append": false, 00:08:20.271 "compare": false, 00:08:20.271 "compare_and_write": false, 00:08:20.271 "abort": false, 00:08:20.271 "seek_hole": false, 00:08:20.271 "seek_data": false, 00:08:20.271 "copy": false, 00:08:20.271 "nvme_iov_md": false 00:08:20.271 }, 00:08:20.271 "memory_domains": [ 00:08:20.271 { 00:08:20.271 "dma_device_id": "system", 00:08:20.271 "dma_device_type": 1 00:08:20.271 }, 00:08:20.271 { 00:08:20.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.271 "dma_device_type": 2 00:08:20.271 }, 00:08:20.271 { 00:08:20.271 "dma_device_id": "system", 00:08:20.271 "dma_device_type": 1 00:08:20.271 }, 00:08:20.271 { 00:08:20.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.271 "dma_device_type": 2 00:08:20.271 } 00:08:20.271 ], 00:08:20.271 "driver_specific": { 00:08:20.271 "raid": { 00:08:20.271 "uuid": "25892654-6240-4818-ba19-dc3b634f96b0", 00:08:20.271 "strip_size_kb": 64, 00:08:20.271 "state": "online", 00:08:20.271 "raid_level": "concat", 00:08:20.271 "superblock": true, 00:08:20.271 "num_base_bdevs": 2, 00:08:20.271 "num_base_bdevs_discovered": 2, 00:08:20.271 "num_base_bdevs_operational": 2, 00:08:20.271 "base_bdevs_list": [ 00:08:20.271 { 00:08:20.271 "name": "pt1", 
00:08:20.271 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.271 "is_configured": true, 00:08:20.271 "data_offset": 2048, 00:08:20.271 "data_size": 63488 00:08:20.271 }, 00:08:20.271 { 00:08:20.271 "name": "pt2", 00:08:20.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.272 "is_configured": true, 00:08:20.272 "data_offset": 2048, 00:08:20.272 "data_size": 63488 00:08:20.272 } 00:08:20.272 ] 00:08:20.272 } 00:08:20.272 } 00:08:20.272 }' 00:08:20.272 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.272 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:20.272 pt2' 00:08:20.272 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.532 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:20.532 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.532 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.532 15:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:20.532 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.532 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.532 15:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.532 [2024-11-25 15:35:19.065869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 25892654-6240-4818-ba19-dc3b634f96b0 '!=' 25892654-6240-4818-ba19-dc3b634f96b0 ']' 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@563 -- # killprocess 62005 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62005 ']' 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62005 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62005 00:08:20.532 killing process with pid 62005 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62005' 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62005 00:08:20.532 [2024-11-25 15:35:19.152368] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.532 [2024-11-25 15:35:19.152460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.532 [2024-11-25 15:35:19.152510] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.532 [2024-11-25 15:35:19.152522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:20.532 15:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62005 00:08:20.810 [2024-11-25 15:35:19.348472] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.749 15:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:21.749 00:08:21.749 real 0m4.306s 00:08:21.749 user 0m6.043s 00:08:21.749 
sys 0m0.718s 00:08:21.749 ************************************ 00:08:21.749 END TEST raid_superblock_test 00:08:21.749 ************************************ 00:08:21.749 15:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.749 15:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.008 15:35:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:22.008 15:35:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:22.008 15:35:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.008 15:35:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.008 ************************************ 00:08:22.008 START TEST raid_read_error_test 00:08:22.008 ************************************ 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KWJcNXBW9n 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62217 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62217 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62217 ']' 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.008 15:35:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.008 [2024-11-25 15:35:20.564207] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:08:22.008 [2024-11-25 15:35:20.564397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62217 ] 00:08:22.266 [2024-11-25 15:35:20.739356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.266 [2024-11-25 15:35:20.846954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.524 [2024-11-25 15:35:21.042852] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.524 [2024-11-25 15:35:21.042972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.783 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.783 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:22.783 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.783 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:08:22.783 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.784 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.784 BaseBdev1_malloc 00:08:22.784 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.784 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:22.784 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.784 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.784 true 00:08:22.784 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.784 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:22.784 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.784 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.784 [2024-11-25 15:35:21.441559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:22.784 [2024-11-25 15:35:21.441654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.784 [2024-11-25 15:35:21.441690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:22.784 [2024-11-25 15:35:21.441720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.784 [2024-11-25 15:35:21.443883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.784 [2024-11-25 15:35:21.443970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:22.784 BaseBdev1 00:08:22.784 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:22.784 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.784 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:22.784 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.784 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.043 BaseBdev2_malloc 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.043 true 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.043 [2024-11-25 15:35:21.508671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:23.043 [2024-11-25 15:35:21.508728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.043 [2024-11-25 15:35:21.508746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:23.043 [2024-11-25 15:35:21.508756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.043 [2024-11-25 15:35:21.510780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:08:23.043 [2024-11-25 15:35:21.510823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:23.043 BaseBdev2 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.043 [2024-11-25 15:35:21.520715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.043 [2024-11-25 15:35:21.522528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.043 [2024-11-25 15:35:21.522730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:23.043 [2024-11-25 15:35:21.522745] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:23.043 [2024-11-25 15:35:21.522960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:23.043 [2024-11-25 15:35:21.523148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:23.043 [2024-11-25 15:35:21.523160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:23.043 [2024-11-25 15:35:21.523304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.043 "name": "raid_bdev1", 00:08:23.043 "uuid": "7c035ee4-4327-4771-807e-5d52fae80dbd", 00:08:23.043 "strip_size_kb": 64, 00:08:23.043 "state": "online", 00:08:23.043 "raid_level": "concat", 00:08:23.043 "superblock": true, 00:08:23.043 "num_base_bdevs": 2, 00:08:23.043 "num_base_bdevs_discovered": 2, 00:08:23.043 "num_base_bdevs_operational": 2, 00:08:23.043 "base_bdevs_list": [ 00:08:23.043 { 00:08:23.043 "name": "BaseBdev1", 00:08:23.043 "uuid": 
"98052b89-ba0e-5833-a682-7940f4516dfa", 00:08:23.043 "is_configured": true, 00:08:23.043 "data_offset": 2048, 00:08:23.043 "data_size": 63488 00:08:23.043 }, 00:08:23.043 { 00:08:23.043 "name": "BaseBdev2", 00:08:23.043 "uuid": "187d5cb4-f27a-5515-809e-5d900400df06", 00:08:23.043 "is_configured": true, 00:08:23.043 "data_offset": 2048, 00:08:23.043 "data_size": 63488 00:08:23.043 } 00:08:23.043 ] 00:08:23.043 }' 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.043 15:35:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.302 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:23.302 15:35:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:23.562 [2024-11-25 15:35:22.001114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.501 "name": "raid_bdev1", 00:08:24.501 "uuid": "7c035ee4-4327-4771-807e-5d52fae80dbd", 00:08:24.501 "strip_size_kb": 64, 00:08:24.501 "state": "online", 00:08:24.501 "raid_level": "concat", 00:08:24.501 "superblock": true, 00:08:24.501 "num_base_bdevs": 2, 00:08:24.501 "num_base_bdevs_discovered": 2, 00:08:24.501 "num_base_bdevs_operational": 2, 00:08:24.501 "base_bdevs_list": [ 00:08:24.501 { 00:08:24.501 "name": "BaseBdev1", 00:08:24.501 "uuid": 
"98052b89-ba0e-5833-a682-7940f4516dfa", 00:08:24.501 "is_configured": true, 00:08:24.501 "data_offset": 2048, 00:08:24.501 "data_size": 63488 00:08:24.501 }, 00:08:24.501 { 00:08:24.501 "name": "BaseBdev2", 00:08:24.501 "uuid": "187d5cb4-f27a-5515-809e-5d900400df06", 00:08:24.501 "is_configured": true, 00:08:24.501 "data_offset": 2048, 00:08:24.501 "data_size": 63488 00:08:24.501 } 00:08:24.501 ] 00:08:24.501 }' 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.501 15:35:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.762 15:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.762 15:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.762 15:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.762 [2024-11-25 15:35:23.351060] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.762 [2024-11-25 15:35:23.351094] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.762 [2024-11-25 15:35:23.353740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.762 [2024-11-25 15:35:23.353784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.762 [2024-11-25 15:35:23.353815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.762 [2024-11-25 15:35:23.353829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:24.762 { 00:08:24.762 "results": [ 00:08:24.762 { 00:08:24.762 "job": "raid_bdev1", 00:08:24.762 "core_mask": "0x1", 00:08:24.762 "workload": "randrw", 00:08:24.762 "percentage": 50, 00:08:24.762 "status": "finished", 00:08:24.762 "queue_depth": 1, 00:08:24.762 "io_size": 
131072, 00:08:24.762 "runtime": 1.350699, 00:08:24.762 "iops": 17245.14492125929, 00:08:24.762 "mibps": 2155.643115157411, 00:08:24.762 "io_failed": 1, 00:08:24.762 "io_timeout": 0, 00:08:24.762 "avg_latency_us": 80.46389725712302, 00:08:24.762 "min_latency_us": 24.258515283842794, 00:08:24.762 "max_latency_us": 1373.6803493449781 00:08:24.762 } 00:08:24.762 ], 00:08:24.762 "core_count": 1 00:08:24.762 } 00:08:24.762 15:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.762 15:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62217 00:08:24.762 15:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62217 ']' 00:08:24.762 15:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62217 00:08:24.762 15:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:24.762 15:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.762 15:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62217 00:08:24.762 killing process with pid 62217 00:08:24.762 15:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.762 15:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.762 15:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62217' 00:08:24.762 15:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62217 00:08:24.762 [2024-11-25 15:35:23.402423] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.762 15:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62217 00:08:25.022 [2024-11-25 15:35:23.537333] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.970 15:35:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KWJcNXBW9n 00:08:26.230 15:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:26.230 15:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:26.230 15:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:26.230 15:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:26.230 15:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:26.230 15:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:26.230 15:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:26.230 ************************************ 00:08:26.230 END TEST raid_read_error_test 00:08:26.230 ************************************ 00:08:26.230 00:08:26.230 real 0m4.201s 00:08:26.230 user 0m5.006s 00:08:26.230 sys 0m0.506s 00:08:26.230 15:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.230 15:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.230 15:35:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:26.230 15:35:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:26.230 15:35:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.230 15:35:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.230 ************************************ 00:08:26.230 START TEST raid_write_error_test 00:08:26.230 ************************************ 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 
00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:26.230 
15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4vjEiZsirV 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62357 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62357 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62357 ']' 00:08:26.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.230 15:35:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.230 [2024-11-25 15:35:24.839427] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:08:26.230 [2024-11-25 15:35:24.839533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62357 ] 00:08:26.490 [2024-11-25 15:35:24.995436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.490 [2024-11-25 15:35:25.097597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.749 [2024-11-25 15:35:25.294091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.749 [2024-11-25 15:35:25.294143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.008 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.008 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:27.008 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:27.008 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:27.008 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.008 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.266 BaseBdev1_malloc 00:08:27.266 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.266 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:27.266 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.266 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.266 true 00:08:27.266 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:27.266 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:27.266 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.266 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.266 [2024-11-25 15:35:25.715363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:27.267 [2024-11-25 15:35:25.715422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.267 [2024-11-25 15:35:25.715441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:27.267 [2024-11-25 15:35:25.715452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.267 [2024-11-25 15:35:25.717506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.267 [2024-11-25 15:35:25.717596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:27.267 BaseBdev1 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.267 BaseBdev2_malloc 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:27.267 15:35:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.267 true 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.267 [2024-11-25 15:35:25.776000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:27.267 [2024-11-25 15:35:25.776127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.267 [2024-11-25 15:35:25.776148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:27.267 [2024-11-25 15:35:25.776158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.267 [2024-11-25 15:35:25.778187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.267 [2024-11-25 15:35:25.778226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:27.267 BaseBdev2 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.267 [2024-11-25 15:35:25.788052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:27.267 [2024-11-25 15:35:25.789830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.267 [2024-11-25 15:35:25.790030] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:27.267 [2024-11-25 15:35:25.790048] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:27.267 [2024-11-25 15:35:25.790279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:27.267 [2024-11-25 15:35:25.790485] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:27.267 [2024-11-25 15:35:25.790506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:27.267 [2024-11-25 15:35:25.790654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.267 15:35:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.267 "name": "raid_bdev1", 00:08:27.267 "uuid": "f09008d0-f80f-473a-9cfc-9f303fb196ff", 00:08:27.267 "strip_size_kb": 64, 00:08:27.267 "state": "online", 00:08:27.267 "raid_level": "concat", 00:08:27.267 "superblock": true, 00:08:27.267 "num_base_bdevs": 2, 00:08:27.267 "num_base_bdevs_discovered": 2, 00:08:27.267 "num_base_bdevs_operational": 2, 00:08:27.267 "base_bdevs_list": [ 00:08:27.267 { 00:08:27.267 "name": "BaseBdev1", 00:08:27.267 "uuid": "70ef7c26-cd8d-586b-baaa-ddf062dfd2c3", 00:08:27.267 "is_configured": true, 00:08:27.267 "data_offset": 2048, 00:08:27.267 "data_size": 63488 00:08:27.267 }, 00:08:27.267 { 00:08:27.267 "name": "BaseBdev2", 00:08:27.267 "uuid": "7a746dee-073b-5142-bf8b-e61e00be5e64", 00:08:27.267 "is_configured": true, 00:08:27.267 "data_offset": 2048, 00:08:27.267 "data_size": 63488 00:08:27.267 } 00:08:27.267 ] 00:08:27.267 }' 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.267 15:35:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.525 15:35:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:27.525 15:35:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:27.783 [2024-11-25 15:35:26.276372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:28.717 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:28.717 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.717 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.717 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.717 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:28.717 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:28.717 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:28.717 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.718 "name": "raid_bdev1", 00:08:28.718 "uuid": "f09008d0-f80f-473a-9cfc-9f303fb196ff", 00:08:28.718 "strip_size_kb": 64, 00:08:28.718 "state": "online", 00:08:28.718 "raid_level": "concat", 00:08:28.718 "superblock": true, 00:08:28.718 "num_base_bdevs": 2, 00:08:28.718 "num_base_bdevs_discovered": 2, 00:08:28.718 "num_base_bdevs_operational": 2, 00:08:28.718 "base_bdevs_list": [ 00:08:28.718 { 00:08:28.718 "name": "BaseBdev1", 00:08:28.718 "uuid": "70ef7c26-cd8d-586b-baaa-ddf062dfd2c3", 00:08:28.718 "is_configured": true, 00:08:28.718 "data_offset": 2048, 00:08:28.718 "data_size": 63488 00:08:28.718 }, 00:08:28.718 { 00:08:28.718 "name": "BaseBdev2", 00:08:28.718 "uuid": "7a746dee-073b-5142-bf8b-e61e00be5e64", 00:08:28.718 "is_configured": true, 00:08:28.718 "data_offset": 2048, 00:08:28.718 "data_size": 63488 00:08:28.718 } 00:08:28.718 ] 00:08:28.718 }' 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.718 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.976 15:35:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:28.976 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.976 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.976 [2024-11-25 15:35:27.607820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.976 [2024-11-25 15:35:27.607934] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.976 [2024-11-25 15:35:27.610763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.976 [2024-11-25 15:35:27.610852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.976 [2024-11-25 15:35:27.610905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.976 [2024-11-25 15:35:27.610958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:28.976 { 00:08:28.976 "results": [ 00:08:28.976 { 00:08:28.976 "job": "raid_bdev1", 00:08:28.976 "core_mask": "0x1", 00:08:28.976 "workload": "randrw", 00:08:28.976 "percentage": 50, 00:08:28.976 "status": "finished", 00:08:28.976 "queue_depth": 1, 00:08:28.976 "io_size": 131072, 00:08:28.976 "runtime": 1.332422, 00:08:28.976 "iops": 17098.18661054831, 00:08:28.976 "mibps": 2137.2733263185387, 00:08:28.976 "io_failed": 1, 00:08:28.976 "io_timeout": 0, 00:08:28.976 "avg_latency_us": 81.05841515555822, 00:08:28.976 "min_latency_us": 24.482096069868994, 00:08:28.976 "max_latency_us": 1395.1441048034935 00:08:28.976 } 00:08:28.976 ], 00:08:28.976 "core_count": 1 00:08:28.976 } 00:08:28.976 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.976 15:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62357 00:08:28.976 15:35:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62357 ']' 00:08:28.976 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62357 00:08:28.976 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:28.976 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.976 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62357 00:08:29.235 killing process with pid 62357 00:08:29.235 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.235 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.235 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62357' 00:08:29.235 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62357 00:08:29.235 [2024-11-25 15:35:27.657625] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.235 15:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62357 00:08:29.235 [2024-11-25 15:35:27.785694] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.609 15:35:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:30.609 15:35:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4vjEiZsirV 00:08:30.609 15:35:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:30.609 15:35:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:30.609 15:35:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:30.609 15:35:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.609 15:35:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:30.609 15:35:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:30.609 ************************************ 00:08:30.609 END TEST raid_write_error_test 00:08:30.609 ************************************ 00:08:30.609 00:08:30.609 real 0m4.160s 00:08:30.609 user 0m4.934s 00:08:30.609 sys 0m0.525s 00:08:30.609 15:35:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.609 15:35:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.609 15:35:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:30.609 15:35:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:30.609 15:35:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:30.609 15:35:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.609 15:35:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.609 ************************************ 00:08:30.609 START TEST raid_state_function_test 00:08:30.609 ************************************ 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:30.609 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:30.610 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62495 00:08:30.610 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:30.610 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62495' 00:08:30.610 Process raid pid: 62495 00:08:30.610 15:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62495 00:08:30.610 15:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62495 ']' 00:08:30.610 15:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.610 15:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.610 15:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.610 15:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.610 15:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.610 [2024-11-25 15:35:29.067272] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:08:30.610 [2024-11-25 15:35:29.067464] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.610 [2024-11-25 15:35:29.240622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.868 [2024-11-25 15:35:29.346070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.868 [2024-11-25 15:35:29.537315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.868 [2024-11-25 15:35:29.537426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.434 15:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.434 15:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:31.434 15:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:31.434 15:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.434 15:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.434 [2024-11-25 15:35:29.877733] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:31.434 [2024-11-25 15:35:29.877861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.434 [2024-11-25 15:35:29.877894] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:31.434 [2024-11-25 15:35:29.877918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.435 15:35:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.435 "name": "Existed_Raid", 00:08:31.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.435 "strip_size_kb": 0, 00:08:31.435 "state": "configuring", 00:08:31.435 
"raid_level": "raid1", 00:08:31.435 "superblock": false, 00:08:31.435 "num_base_bdevs": 2, 00:08:31.435 "num_base_bdevs_discovered": 0, 00:08:31.435 "num_base_bdevs_operational": 2, 00:08:31.435 "base_bdevs_list": [ 00:08:31.435 { 00:08:31.435 "name": "BaseBdev1", 00:08:31.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.435 "is_configured": false, 00:08:31.435 "data_offset": 0, 00:08:31.435 "data_size": 0 00:08:31.435 }, 00:08:31.435 { 00:08:31.435 "name": "BaseBdev2", 00:08:31.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.435 "is_configured": false, 00:08:31.435 "data_offset": 0, 00:08:31.435 "data_size": 0 00:08:31.435 } 00:08:31.435 ] 00:08:31.435 }' 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.435 15:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.693 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:31.693 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.693 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.693 [2024-11-25 15:35:30.308953] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:31.693 [2024-11-25 15:35:30.309051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:31.693 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.693 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:31.693 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.693 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:31.693 [2024-11-25 15:35:30.320919] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:31.694 [2024-11-25 15:35:30.320961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.694 [2024-11-25 15:35:30.320970] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:31.694 [2024-11-25 15:35:30.320997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:31.694 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.694 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:31.694 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.694 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.694 [2024-11-25 15:35:30.367096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:31.694 BaseBdev1 00:08:31.694 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.694 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:31.694 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:31.694 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:31.694 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:31.694 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:31.694 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:31.694 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:31.694 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.694 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.953 [ 00:08:31.953 { 00:08:31.953 "name": "BaseBdev1", 00:08:31.953 "aliases": [ 00:08:31.953 "31b720a4-b907-41f5-a5ea-59d8650e7a12" 00:08:31.953 ], 00:08:31.953 "product_name": "Malloc disk", 00:08:31.953 "block_size": 512, 00:08:31.953 "num_blocks": 65536, 00:08:31.953 "uuid": "31b720a4-b907-41f5-a5ea-59d8650e7a12", 00:08:31.953 "assigned_rate_limits": { 00:08:31.953 "rw_ios_per_sec": 0, 00:08:31.953 "rw_mbytes_per_sec": 0, 00:08:31.953 "r_mbytes_per_sec": 0, 00:08:31.953 "w_mbytes_per_sec": 0 00:08:31.953 }, 00:08:31.953 "claimed": true, 00:08:31.953 "claim_type": "exclusive_write", 00:08:31.953 "zoned": false, 00:08:31.953 "supported_io_types": { 00:08:31.953 "read": true, 00:08:31.953 "write": true, 00:08:31.953 "unmap": true, 00:08:31.953 "flush": true, 00:08:31.953 "reset": true, 00:08:31.953 "nvme_admin": false, 00:08:31.953 "nvme_io": false, 00:08:31.953 "nvme_io_md": false, 00:08:31.953 "write_zeroes": true, 00:08:31.953 "zcopy": true, 00:08:31.953 "get_zone_info": false, 00:08:31.953 "zone_management": false, 00:08:31.953 "zone_append": false, 00:08:31.953 "compare": false, 00:08:31.953 "compare_and_write": false, 00:08:31.953 "abort": true, 00:08:31.953 "seek_hole": false, 00:08:31.953 "seek_data": false, 00:08:31.953 "copy": true, 00:08:31.953 "nvme_iov_md": 
false 00:08:31.953 }, 00:08:31.953 "memory_domains": [ 00:08:31.953 { 00:08:31.953 "dma_device_id": "system", 00:08:31.953 "dma_device_type": 1 00:08:31.953 }, 00:08:31.953 { 00:08:31.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.953 "dma_device_type": 2 00:08:31.953 } 00:08:31.953 ], 00:08:31.953 "driver_specific": {} 00:08:31.953 } 00:08:31.953 ] 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.953 
15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.953 "name": "Existed_Raid", 00:08:31.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.953 "strip_size_kb": 0, 00:08:31.953 "state": "configuring", 00:08:31.953 "raid_level": "raid1", 00:08:31.953 "superblock": false, 00:08:31.953 "num_base_bdevs": 2, 00:08:31.953 "num_base_bdevs_discovered": 1, 00:08:31.953 "num_base_bdevs_operational": 2, 00:08:31.953 "base_bdevs_list": [ 00:08:31.953 { 00:08:31.953 "name": "BaseBdev1", 00:08:31.953 "uuid": "31b720a4-b907-41f5-a5ea-59d8650e7a12", 00:08:31.953 "is_configured": true, 00:08:31.953 "data_offset": 0, 00:08:31.953 "data_size": 65536 00:08:31.953 }, 00:08:31.953 { 00:08:31.953 "name": "BaseBdev2", 00:08:31.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.953 "is_configured": false, 00:08:31.953 "data_offset": 0, 00:08:31.953 "data_size": 0 00:08:31.953 } 00:08:31.953 ] 00:08:31.953 }' 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.953 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.213 [2024-11-25 15:35:30.818391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.213 [2024-11-25 15:35:30.818489] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.213 [2024-11-25 15:35:30.830408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.213 [2024-11-25 15:35:30.832256] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.213 [2024-11-25 15:35:30.832331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.213 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.214 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.214 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.214 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.214 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.214 "name": "Existed_Raid", 00:08:32.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.214 "strip_size_kb": 0, 00:08:32.214 "state": "configuring", 00:08:32.214 "raid_level": "raid1", 00:08:32.214 "superblock": false, 00:08:32.214 "num_base_bdevs": 2, 00:08:32.214 "num_base_bdevs_discovered": 1, 00:08:32.214 "num_base_bdevs_operational": 2, 00:08:32.214 "base_bdevs_list": [ 00:08:32.214 { 00:08:32.214 "name": "BaseBdev1", 00:08:32.214 "uuid": "31b720a4-b907-41f5-a5ea-59d8650e7a12", 00:08:32.214 "is_configured": true, 00:08:32.214 "data_offset": 0, 00:08:32.214 "data_size": 65536 00:08:32.214 }, 00:08:32.214 { 00:08:32.214 "name": "BaseBdev2", 00:08:32.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.214 "is_configured": false, 00:08:32.214 "data_offset": 0, 00:08:32.214 "data_size": 0 00:08:32.214 } 00:08:32.214 ] 
00:08:32.214 }' 00:08:32.214 15:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.214 15:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.785 [2024-11-25 15:35:31.351549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.785 [2024-11-25 15:35:31.351692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:32.785 [2024-11-25 15:35:31.351724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:32.785 [2024-11-25 15:35:31.352066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:32.785 [2024-11-25 15:35:31.352294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:32.785 [2024-11-25 15:35:31.352349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:32.785 [2024-11-25 15:35:31.352660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.785 BaseBdev2 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.785 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.785 [ 00:08:32.785 { 00:08:32.785 "name": "BaseBdev2", 00:08:32.785 "aliases": [ 00:08:32.785 "8ce5596d-6319-4563-9668-ab796ef015fb" 00:08:32.785 ], 00:08:32.785 "product_name": "Malloc disk", 00:08:32.785 "block_size": 512, 00:08:32.785 "num_blocks": 65536, 00:08:32.785 "uuid": "8ce5596d-6319-4563-9668-ab796ef015fb", 00:08:32.785 "assigned_rate_limits": { 00:08:32.785 "rw_ios_per_sec": 0, 00:08:32.785 "rw_mbytes_per_sec": 0, 00:08:32.785 "r_mbytes_per_sec": 0, 00:08:32.785 "w_mbytes_per_sec": 0 00:08:32.785 }, 00:08:32.785 "claimed": true, 00:08:32.785 "claim_type": "exclusive_write", 00:08:32.785 "zoned": false, 00:08:32.785 "supported_io_types": { 00:08:32.785 "read": true, 00:08:32.785 "write": true, 00:08:32.785 "unmap": true, 00:08:32.785 "flush": true, 00:08:32.785 "reset": true, 00:08:32.785 "nvme_admin": false, 00:08:32.785 "nvme_io": false, 00:08:32.785 "nvme_io_md": false, 00:08:32.785 "write_zeroes": 
true, 00:08:32.785 "zcopy": true, 00:08:32.785 "get_zone_info": false, 00:08:32.785 "zone_management": false, 00:08:32.785 "zone_append": false, 00:08:32.785 "compare": false, 00:08:32.785 "compare_and_write": false, 00:08:32.786 "abort": true, 00:08:32.786 "seek_hole": false, 00:08:32.786 "seek_data": false, 00:08:32.786 "copy": true, 00:08:32.786 "nvme_iov_md": false 00:08:32.786 }, 00:08:32.786 "memory_domains": [ 00:08:32.786 { 00:08:32.786 "dma_device_id": "system", 00:08:32.786 "dma_device_type": 1 00:08:32.786 }, 00:08:32.786 { 00:08:32.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.786 "dma_device_type": 2 00:08:32.786 } 00:08:32.786 ], 00:08:32.786 "driver_specific": {} 00:08:32.786 } 00:08:32.786 ] 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.786 15:35:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.786 "name": "Existed_Raid", 00:08:32.786 "uuid": "2479a4a7-19fd-4f2d-9e00-2afddc163e3a", 00:08:32.786 "strip_size_kb": 0, 00:08:32.786 "state": "online", 00:08:32.786 "raid_level": "raid1", 00:08:32.786 "superblock": false, 00:08:32.786 "num_base_bdevs": 2, 00:08:32.786 "num_base_bdevs_discovered": 2, 00:08:32.786 "num_base_bdevs_operational": 2, 00:08:32.786 "base_bdevs_list": [ 00:08:32.786 { 00:08:32.786 "name": "BaseBdev1", 00:08:32.786 "uuid": "31b720a4-b907-41f5-a5ea-59d8650e7a12", 00:08:32.786 "is_configured": true, 00:08:32.786 "data_offset": 0, 00:08:32.786 "data_size": 65536 00:08:32.786 }, 00:08:32.786 { 00:08:32.786 "name": "BaseBdev2", 00:08:32.786 "uuid": "8ce5596d-6319-4563-9668-ab796ef015fb", 00:08:32.786 "is_configured": true, 00:08:32.786 "data_offset": 0, 00:08:32.786 "data_size": 65536 00:08:32.786 } 00:08:32.786 ] 00:08:32.786 }' 00:08:32.786 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.786 15:35:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.369 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:33.369 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:33.369 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:33.369 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:33.369 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:33.369 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:33.369 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:33.369 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:33.369 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.369 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.369 [2024-11-25 15:35:31.846968] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.369 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.369 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:33.369 "name": "Existed_Raid", 00:08:33.369 "aliases": [ 00:08:33.369 "2479a4a7-19fd-4f2d-9e00-2afddc163e3a" 00:08:33.369 ], 00:08:33.369 "product_name": "Raid Volume", 00:08:33.369 "block_size": 512, 00:08:33.369 "num_blocks": 65536, 00:08:33.369 "uuid": "2479a4a7-19fd-4f2d-9e00-2afddc163e3a", 00:08:33.369 "assigned_rate_limits": { 00:08:33.369 "rw_ios_per_sec": 0, 00:08:33.369 "rw_mbytes_per_sec": 0, 00:08:33.369 "r_mbytes_per_sec": 0, 00:08:33.369 
"w_mbytes_per_sec": 0 00:08:33.369 }, 00:08:33.369 "claimed": false, 00:08:33.369 "zoned": false, 00:08:33.369 "supported_io_types": { 00:08:33.369 "read": true, 00:08:33.369 "write": true, 00:08:33.369 "unmap": false, 00:08:33.369 "flush": false, 00:08:33.369 "reset": true, 00:08:33.369 "nvme_admin": false, 00:08:33.369 "nvme_io": false, 00:08:33.369 "nvme_io_md": false, 00:08:33.369 "write_zeroes": true, 00:08:33.369 "zcopy": false, 00:08:33.369 "get_zone_info": false, 00:08:33.369 "zone_management": false, 00:08:33.369 "zone_append": false, 00:08:33.369 "compare": false, 00:08:33.369 "compare_and_write": false, 00:08:33.369 "abort": false, 00:08:33.369 "seek_hole": false, 00:08:33.369 "seek_data": false, 00:08:33.369 "copy": false, 00:08:33.369 "nvme_iov_md": false 00:08:33.369 }, 00:08:33.369 "memory_domains": [ 00:08:33.369 { 00:08:33.369 "dma_device_id": "system", 00:08:33.369 "dma_device_type": 1 00:08:33.369 }, 00:08:33.369 { 00:08:33.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.369 "dma_device_type": 2 00:08:33.369 }, 00:08:33.369 { 00:08:33.369 "dma_device_id": "system", 00:08:33.369 "dma_device_type": 1 00:08:33.369 }, 00:08:33.369 { 00:08:33.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.369 "dma_device_type": 2 00:08:33.369 } 00:08:33.369 ], 00:08:33.369 "driver_specific": { 00:08:33.369 "raid": { 00:08:33.369 "uuid": "2479a4a7-19fd-4f2d-9e00-2afddc163e3a", 00:08:33.369 "strip_size_kb": 0, 00:08:33.369 "state": "online", 00:08:33.369 "raid_level": "raid1", 00:08:33.369 "superblock": false, 00:08:33.369 "num_base_bdevs": 2, 00:08:33.369 "num_base_bdevs_discovered": 2, 00:08:33.369 "num_base_bdevs_operational": 2, 00:08:33.369 "base_bdevs_list": [ 00:08:33.369 { 00:08:33.369 "name": "BaseBdev1", 00:08:33.369 "uuid": "31b720a4-b907-41f5-a5ea-59d8650e7a12", 00:08:33.369 "is_configured": true, 00:08:33.369 "data_offset": 0, 00:08:33.369 "data_size": 65536 00:08:33.369 }, 00:08:33.369 { 00:08:33.369 "name": "BaseBdev2", 00:08:33.369 "uuid": 
"8ce5596d-6319-4563-9668-ab796ef015fb", 00:08:33.369 "is_configured": true, 00:08:33.369 "data_offset": 0, 00:08:33.369 "data_size": 65536 00:08:33.369 } 00:08:33.370 ] 00:08:33.370 } 00:08:33.370 } 00:08:33.370 }' 00:08:33.370 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.370 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:33.370 BaseBdev2' 00:08:33.370 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.370 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:33.370 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.370 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:33.370 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.370 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.370 15:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.370 15:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.370 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.370 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.370 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.370 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:33.370 15:35:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.370 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.370 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.370 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.629 [2024-11-25 15:35:32.062454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.629 "name": "Existed_Raid", 00:08:33.629 "uuid": "2479a4a7-19fd-4f2d-9e00-2afddc163e3a", 00:08:33.629 "strip_size_kb": 0, 00:08:33.629 "state": "online", 00:08:33.629 "raid_level": "raid1", 00:08:33.629 "superblock": false, 00:08:33.629 "num_base_bdevs": 2, 00:08:33.629 "num_base_bdevs_discovered": 1, 00:08:33.629 "num_base_bdevs_operational": 1, 00:08:33.629 "base_bdevs_list": [ 00:08:33.629 { 
00:08:33.629 "name": null, 00:08:33.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.629 "is_configured": false, 00:08:33.629 "data_offset": 0, 00:08:33.629 "data_size": 65536 00:08:33.629 }, 00:08:33.629 { 00:08:33.629 "name": "BaseBdev2", 00:08:33.629 "uuid": "8ce5596d-6319-4563-9668-ab796ef015fb", 00:08:33.629 "is_configured": true, 00:08:33.629 "data_offset": 0, 00:08:33.629 "data_size": 65536 00:08:33.629 } 00:08:33.629 ] 00:08:33.629 }' 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.629 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:34.197 [2024-11-25 15:35:32.654535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:34.197 [2024-11-25 15:35:32.654678] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.197 [2024-11-25 15:35:32.748491] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.197 [2024-11-25 15:35:32.748614] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.197 [2024-11-25 15:35:32.748658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62495 00:08:34.197 15:35:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62495 ']' 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62495 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:34.197 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.198 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62495 00:08:34.198 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.198 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.198 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62495' 00:08:34.198 killing process with pid 62495 00:08:34.198 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62495 00:08:34.198 [2024-11-25 15:35:32.843894] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.198 15:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62495 00:08:34.198 [2024-11-25 15:35:32.859513] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.579 ************************************ 00:08:35.579 END TEST raid_state_function_test 00:08:35.579 ************************************ 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:35.579 00:08:35.579 real 0m4.941s 00:08:35.579 user 0m7.159s 00:08:35.579 sys 0m0.811s 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.579 15:35:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:35.579 15:35:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:35.579 15:35:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.579 15:35:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.579 ************************************ 00:08:35.579 START TEST raid_state_function_test_sb 00:08:35.579 ************************************ 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:35.579 Process raid pid: 62748 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62748 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62748' 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62748 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62748 ']' 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.579 15:35:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.579 15:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.579 [2024-11-25 15:35:34.080071] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:08:35.579 [2024-11-25 15:35:34.080280] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.579 [2024-11-25 15:35:34.251142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.839 [2024-11-25 15:35:34.359503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.098 [2024-11-25 15:35:34.559055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.098 [2024-11-25 15:35:34.559138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.357 [2024-11-25 15:35:34.898413] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.357 [2024-11-25 15:35:34.898512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.357 [2024-11-25 15:35:34.898542] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:36.357 [2024-11-25 15:35:34.898565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.357 15:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.357 "name": "Existed_Raid", 00:08:36.357 "uuid": "55a83106-41a1-437a-a229-32e183841151", 00:08:36.357 "strip_size_kb": 0, 00:08:36.357 "state": "configuring", 00:08:36.357 "raid_level": "raid1", 00:08:36.357 "superblock": true, 00:08:36.357 "num_base_bdevs": 2, 00:08:36.357 "num_base_bdevs_discovered": 0, 00:08:36.357 "num_base_bdevs_operational": 2, 00:08:36.357 "base_bdevs_list": [ 00:08:36.357 { 00:08:36.357 "name": "BaseBdev1", 00:08:36.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.357 "is_configured": false, 00:08:36.357 "data_offset": 0, 00:08:36.357 "data_size": 0 00:08:36.357 }, 00:08:36.357 { 00:08:36.357 "name": "BaseBdev2", 00:08:36.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.358 "is_configured": false, 00:08:36.358 "data_offset": 0, 00:08:36.358 "data_size": 0 00:08:36.358 } 00:08:36.358 ] 00:08:36.358 }' 00:08:36.358 15:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.358 15:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.926 [2024-11-25 15:35:35.345571] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:36.926 [2024-11-25 15:35:35.345645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.926 [2024-11-25 15:35:35.353559] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.926 [2024-11-25 15:35:35.353634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.926 [2024-11-25 15:35:35.353676] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:36.926 [2024-11-25 15:35:35.353701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.926 [2024-11-25 15:35:35.395709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.926 BaseBdev1 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.926 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.926 [ 00:08:36.926 { 00:08:36.926 "name": "BaseBdev1", 00:08:36.926 "aliases": [ 00:08:36.926 "1387cc5a-a811-4088-a02e-0b5963d2d07b" 00:08:36.926 ], 00:08:36.926 "product_name": "Malloc disk", 00:08:36.927 "block_size": 512, 00:08:36.927 "num_blocks": 65536, 00:08:36.927 "uuid": "1387cc5a-a811-4088-a02e-0b5963d2d07b", 00:08:36.927 "assigned_rate_limits": { 00:08:36.927 "rw_ios_per_sec": 0, 00:08:36.927 "rw_mbytes_per_sec": 0, 00:08:36.927 "r_mbytes_per_sec": 0, 00:08:36.927 "w_mbytes_per_sec": 0 00:08:36.927 }, 00:08:36.927 "claimed": true, 
00:08:36.927 "claim_type": "exclusive_write", 00:08:36.927 "zoned": false, 00:08:36.927 "supported_io_types": { 00:08:36.927 "read": true, 00:08:36.927 "write": true, 00:08:36.927 "unmap": true, 00:08:36.927 "flush": true, 00:08:36.927 "reset": true, 00:08:36.927 "nvme_admin": false, 00:08:36.927 "nvme_io": false, 00:08:36.927 "nvme_io_md": false, 00:08:36.927 "write_zeroes": true, 00:08:36.927 "zcopy": true, 00:08:36.927 "get_zone_info": false, 00:08:36.927 "zone_management": false, 00:08:36.927 "zone_append": false, 00:08:36.927 "compare": false, 00:08:36.927 "compare_and_write": false, 00:08:36.927 "abort": true, 00:08:36.927 "seek_hole": false, 00:08:36.927 "seek_data": false, 00:08:36.927 "copy": true, 00:08:36.927 "nvme_iov_md": false 00:08:36.927 }, 00:08:36.927 "memory_domains": [ 00:08:36.927 { 00:08:36.927 "dma_device_id": "system", 00:08:36.927 "dma_device_type": 1 00:08:36.927 }, 00:08:36.927 { 00:08:36.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.927 "dma_device_type": 2 00:08:36.927 } 00:08:36.927 ], 00:08:36.927 "driver_specific": {} 00:08:36.927 } 00:08:36.927 ] 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.927 "name": "Existed_Raid", 00:08:36.927 "uuid": "3ecf0a8f-7f80-4f5e-9894-e4fea84bc86c", 00:08:36.927 "strip_size_kb": 0, 00:08:36.927 "state": "configuring", 00:08:36.927 "raid_level": "raid1", 00:08:36.927 "superblock": true, 00:08:36.927 "num_base_bdevs": 2, 00:08:36.927 "num_base_bdevs_discovered": 1, 00:08:36.927 "num_base_bdevs_operational": 2, 00:08:36.927 "base_bdevs_list": [ 00:08:36.927 { 00:08:36.927 "name": "BaseBdev1", 00:08:36.927 "uuid": "1387cc5a-a811-4088-a02e-0b5963d2d07b", 00:08:36.927 "is_configured": true, 00:08:36.927 "data_offset": 2048, 00:08:36.927 "data_size": 63488 00:08:36.927 }, 00:08:36.927 { 00:08:36.927 "name": "BaseBdev2", 00:08:36.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.927 "is_configured": false, 00:08:36.927 
"data_offset": 0, 00:08:36.927 "data_size": 0 00:08:36.927 } 00:08:36.927 ] 00:08:36.927 }' 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.927 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.186 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.186 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.186 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.446 [2024-11-25 15:35:35.866950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.446 [2024-11-25 15:35:35.867002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.446 [2024-11-25 15:35:35.878966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.446 [2024-11-25 15:35:35.880810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.446 [2024-11-25 15:35:35.880853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.446 "name": "Existed_Raid", 00:08:37.446 "uuid": "187e2f73-3115-44e5-8aac-102ec846c4b1", 00:08:37.446 "strip_size_kb": 0, 00:08:37.446 "state": "configuring", 00:08:37.446 "raid_level": "raid1", 00:08:37.446 "superblock": true, 00:08:37.446 "num_base_bdevs": 2, 00:08:37.446 "num_base_bdevs_discovered": 1, 00:08:37.446 "num_base_bdevs_operational": 2, 00:08:37.446 "base_bdevs_list": [ 00:08:37.446 { 00:08:37.446 "name": "BaseBdev1", 00:08:37.446 "uuid": "1387cc5a-a811-4088-a02e-0b5963d2d07b", 00:08:37.446 "is_configured": true, 00:08:37.446 "data_offset": 2048, 00:08:37.446 "data_size": 63488 00:08:37.446 }, 00:08:37.446 { 00:08:37.446 "name": "BaseBdev2", 00:08:37.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.446 "is_configured": false, 00:08:37.446 "data_offset": 0, 00:08:37.446 "data_size": 0 00:08:37.446 } 00:08:37.446 ] 00:08:37.446 }' 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.446 15:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.706 [2024-11-25 15:35:36.330812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.706 [2024-11-25 15:35:36.331181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:37.706 [2024-11-25 15:35:36.331239] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:37.706 [2024-11-25 15:35:36.331530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:37.706 
BaseBdev2 00:08:37.706 [2024-11-25 15:35:36.331740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:37.706 [2024-11-25 15:35:36.331799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.706 [2024-11-25 15:35:36.332031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.706 15:35:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:37.707 [ 00:08:37.707 { 00:08:37.707 "name": "BaseBdev2", 00:08:37.707 "aliases": [ 00:08:37.707 "59819dc0-8e38-44b2-bfd7-94370f128822" 00:08:37.707 ], 00:08:37.707 "product_name": "Malloc disk", 00:08:37.707 "block_size": 512, 00:08:37.707 "num_blocks": 65536, 00:08:37.707 "uuid": "59819dc0-8e38-44b2-bfd7-94370f128822", 00:08:37.707 "assigned_rate_limits": { 00:08:37.707 "rw_ios_per_sec": 0, 00:08:37.707 "rw_mbytes_per_sec": 0, 00:08:37.707 "r_mbytes_per_sec": 0, 00:08:37.707 "w_mbytes_per_sec": 0 00:08:37.707 }, 00:08:37.707 "claimed": true, 00:08:37.707 "claim_type": "exclusive_write", 00:08:37.707 "zoned": false, 00:08:37.707 "supported_io_types": { 00:08:37.707 "read": true, 00:08:37.707 "write": true, 00:08:37.707 "unmap": true, 00:08:37.707 "flush": true, 00:08:37.707 "reset": true, 00:08:37.707 "nvme_admin": false, 00:08:37.707 "nvme_io": false, 00:08:37.707 "nvme_io_md": false, 00:08:37.707 "write_zeroes": true, 00:08:37.707 "zcopy": true, 00:08:37.707 "get_zone_info": false, 00:08:37.707 "zone_management": false, 00:08:37.707 "zone_append": false, 00:08:37.707 "compare": false, 00:08:37.707 "compare_and_write": false, 00:08:37.707 "abort": true, 00:08:37.707 "seek_hole": false, 00:08:37.707 "seek_data": false, 00:08:37.707 "copy": true, 00:08:37.707 "nvme_iov_md": false 00:08:37.707 }, 00:08:37.707 "memory_domains": [ 00:08:37.707 { 00:08:37.707 "dma_device_id": "system", 00:08:37.707 "dma_device_type": 1 00:08:37.707 }, 00:08:37.707 { 00:08:37.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.707 "dma_device_type": 2 00:08:37.707 } 00:08:37.707 ], 00:08:37.707 "driver_specific": {} 00:08:37.707 } 00:08:37.707 ] 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.707 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.967 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:37.967 "name": "Existed_Raid", 00:08:37.967 "uuid": "187e2f73-3115-44e5-8aac-102ec846c4b1", 00:08:37.967 "strip_size_kb": 0, 00:08:37.967 "state": "online", 00:08:37.967 "raid_level": "raid1", 00:08:37.967 "superblock": true, 00:08:37.967 "num_base_bdevs": 2, 00:08:37.967 "num_base_bdevs_discovered": 2, 00:08:37.967 "num_base_bdevs_operational": 2, 00:08:37.967 "base_bdevs_list": [ 00:08:37.967 { 00:08:37.967 "name": "BaseBdev1", 00:08:37.967 "uuid": "1387cc5a-a811-4088-a02e-0b5963d2d07b", 00:08:37.967 "is_configured": true, 00:08:37.967 "data_offset": 2048, 00:08:37.967 "data_size": 63488 00:08:37.967 }, 00:08:37.967 { 00:08:37.967 "name": "BaseBdev2", 00:08:37.967 "uuid": "59819dc0-8e38-44b2-bfd7-94370f128822", 00:08:37.967 "is_configured": true, 00:08:37.967 "data_offset": 2048, 00:08:37.967 "data_size": 63488 00:08:37.967 } 00:08:37.967 ] 00:08:37.967 }' 00:08:37.967 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.967 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.227 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:38.227 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:38.227 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:38.227 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:38.227 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:38.227 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:38.227 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:38.227 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:38.227 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.227 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.227 [2024-11-25 15:35:36.794354] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.227 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.227 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:38.227 "name": "Existed_Raid", 00:08:38.227 "aliases": [ 00:08:38.227 "187e2f73-3115-44e5-8aac-102ec846c4b1" 00:08:38.227 ], 00:08:38.227 "product_name": "Raid Volume", 00:08:38.227 "block_size": 512, 00:08:38.227 "num_blocks": 63488, 00:08:38.227 "uuid": "187e2f73-3115-44e5-8aac-102ec846c4b1", 00:08:38.227 "assigned_rate_limits": { 00:08:38.227 "rw_ios_per_sec": 0, 00:08:38.227 "rw_mbytes_per_sec": 0, 00:08:38.227 "r_mbytes_per_sec": 0, 00:08:38.227 "w_mbytes_per_sec": 0 00:08:38.227 }, 00:08:38.227 "claimed": false, 00:08:38.227 "zoned": false, 00:08:38.227 "supported_io_types": { 00:08:38.227 "read": true, 00:08:38.227 "write": true, 00:08:38.227 "unmap": false, 00:08:38.227 "flush": false, 00:08:38.227 "reset": true, 00:08:38.227 "nvme_admin": false, 00:08:38.227 "nvme_io": false, 00:08:38.227 "nvme_io_md": false, 00:08:38.227 "write_zeroes": true, 00:08:38.227 "zcopy": false, 00:08:38.227 "get_zone_info": false, 00:08:38.227 "zone_management": false, 00:08:38.227 "zone_append": false, 00:08:38.227 "compare": false, 00:08:38.227 "compare_and_write": false, 00:08:38.227 "abort": false, 00:08:38.227 "seek_hole": false, 00:08:38.227 "seek_data": false, 00:08:38.227 "copy": false, 00:08:38.227 "nvme_iov_md": false 00:08:38.227 }, 00:08:38.227 "memory_domains": [ 00:08:38.227 { 00:08:38.227 "dma_device_id": "system", 00:08:38.227 "dma_device_type": 1 00:08:38.227 }, 
00:08:38.227 { 00:08:38.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.227 "dma_device_type": 2 00:08:38.227 }, 00:08:38.227 { 00:08:38.227 "dma_device_id": "system", 00:08:38.227 "dma_device_type": 1 00:08:38.227 }, 00:08:38.227 { 00:08:38.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.227 "dma_device_type": 2 00:08:38.227 } 00:08:38.227 ], 00:08:38.227 "driver_specific": { 00:08:38.227 "raid": { 00:08:38.227 "uuid": "187e2f73-3115-44e5-8aac-102ec846c4b1", 00:08:38.227 "strip_size_kb": 0, 00:08:38.227 "state": "online", 00:08:38.227 "raid_level": "raid1", 00:08:38.227 "superblock": true, 00:08:38.227 "num_base_bdevs": 2, 00:08:38.227 "num_base_bdevs_discovered": 2, 00:08:38.227 "num_base_bdevs_operational": 2, 00:08:38.227 "base_bdevs_list": [ 00:08:38.227 { 00:08:38.227 "name": "BaseBdev1", 00:08:38.227 "uuid": "1387cc5a-a811-4088-a02e-0b5963d2d07b", 00:08:38.227 "is_configured": true, 00:08:38.227 "data_offset": 2048, 00:08:38.227 "data_size": 63488 00:08:38.227 }, 00:08:38.227 { 00:08:38.227 "name": "BaseBdev2", 00:08:38.227 "uuid": "59819dc0-8e38-44b2-bfd7-94370f128822", 00:08:38.227 "is_configured": true, 00:08:38.227 "data_offset": 2048, 00:08:38.227 "data_size": 63488 00:08:38.227 } 00:08:38.227 ] 00:08:38.227 } 00:08:38.227 } 00:08:38.227 }' 00:08:38.227 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:38.227 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:38.227 BaseBdev2' 00:08:38.227 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.488 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:38.488 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:38.488 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:38.488 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.488 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.488 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.488 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.488 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.488 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.488 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.488 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.488 15:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:38.488 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.488 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.488 15:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.488 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.488 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.488 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:38.488 15:35:37 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.488 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.489 [2024-11-25 15:35:37.009741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.489 
15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.489 "name": "Existed_Raid", 00:08:38.489 "uuid": "187e2f73-3115-44e5-8aac-102ec846c4b1", 00:08:38.489 "strip_size_kb": 0, 00:08:38.489 "state": "online", 00:08:38.489 "raid_level": "raid1", 00:08:38.489 "superblock": true, 00:08:38.489 "num_base_bdevs": 2, 00:08:38.489 "num_base_bdevs_discovered": 1, 00:08:38.489 "num_base_bdevs_operational": 1, 00:08:38.489 "base_bdevs_list": [ 00:08:38.489 { 00:08:38.489 "name": null, 00:08:38.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.489 "is_configured": false, 00:08:38.489 "data_offset": 0, 00:08:38.489 "data_size": 63488 00:08:38.489 }, 00:08:38.489 { 00:08:38.489 "name": "BaseBdev2", 00:08:38.489 "uuid": "59819dc0-8e38-44b2-bfd7-94370f128822", 00:08:38.489 "is_configured": true, 00:08:38.489 "data_offset": 2048, 00:08:38.489 "data_size": 63488 00:08:38.489 } 00:08:38.489 ] 00:08:38.489 }' 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.489 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:39.059 15:35:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.059 [2024-11-25 15:35:37.582237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:39.059 [2024-11-25 15:35:37.582338] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.059 [2024-11-25 15:35:37.675157] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.059 [2024-11-25 15:35:37.675280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.059 [2024-11-25 15:35:37.675322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62748 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62748 ']' 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62748 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.059 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62748 00:08:39.319 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.319 killing process with pid 62748 
00:08:39.319 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.319 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62748' 00:08:39.319 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62748 00:08:39.319 [2024-11-25 15:35:37.747494] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.319 15:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62748 00:08:39.319 [2024-11-25 15:35:37.764327] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:40.256 15:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:40.256 00:08:40.256 real 0m4.846s 00:08:40.256 user 0m7.028s 00:08:40.256 sys 0m0.749s 00:08:40.256 15:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.256 15:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.256 ************************************ 00:08:40.256 END TEST raid_state_function_test_sb 00:08:40.256 ************************************ 00:08:40.256 15:35:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:40.256 15:35:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:40.256 15:35:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.256 15:35:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.256 ************************************ 00:08:40.256 START TEST raid_superblock_test 00:08:40.256 ************************************ 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62994 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62994 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62994 ']' 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.256 15:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.516 [2024-11-25 15:35:38.973536] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:08:40.516 [2024-11-25 15:35:38.973664] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62994 ] 00:08:40.516 [2024-11-25 15:35:39.147108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.774 [2024-11-25 15:35:39.258916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.041 [2024-11-25 15:35:39.455996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.041 [2024-11-25 15:35:39.456068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.313 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.313 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:41.313 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:41.313 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.313 15:35:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:41.313 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:41.313 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:41.313 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.313 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.313 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.314 malloc1 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.314 [2024-11-25 15:35:39.823122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:41.314 [2024-11-25 15:35:39.823248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.314 [2024-11-25 15:35:39.823295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:41.314 [2024-11-25 15:35:39.823327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.314 
[2024-11-25 15:35:39.825534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.314 [2024-11-25 15:35:39.825615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:41.314 pt1 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.314 malloc2 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.314 15:35:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.314 [2024-11-25 15:35:39.882154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:41.314 [2024-11-25 15:35:39.882209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.314 [2024-11-25 15:35:39.882230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:41.314 [2024-11-25 15:35:39.882239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.314 [2024-11-25 15:35:39.884402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.314 [2024-11-25 15:35:39.884438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:41.314 pt2 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.314 [2024-11-25 15:35:39.894171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:41.314 [2024-11-25 15:35:39.895965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:41.314 [2024-11-25 15:35:39.896173] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:41.314 [2024-11-25 15:35:39.896197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:41.314 [2024-11-25 
15:35:39.896446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:41.314 [2024-11-25 15:35:39.896617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:41.314 [2024-11-25 15:35:39.896631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:41.314 [2024-11-25 15:35:39.896776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.314 15:35:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.314 "name": "raid_bdev1", 00:08:41.314 "uuid": "0314f88b-9dec-4766-9b93-83652e582e9b", 00:08:41.314 "strip_size_kb": 0, 00:08:41.314 "state": "online", 00:08:41.314 "raid_level": "raid1", 00:08:41.314 "superblock": true, 00:08:41.314 "num_base_bdevs": 2, 00:08:41.314 "num_base_bdevs_discovered": 2, 00:08:41.314 "num_base_bdevs_operational": 2, 00:08:41.314 "base_bdevs_list": [ 00:08:41.314 { 00:08:41.314 "name": "pt1", 00:08:41.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.314 "is_configured": true, 00:08:41.314 "data_offset": 2048, 00:08:41.314 "data_size": 63488 00:08:41.314 }, 00:08:41.314 { 00:08:41.314 "name": "pt2", 00:08:41.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.314 "is_configured": true, 00:08:41.314 "data_offset": 2048, 00:08:41.314 "data_size": 63488 00:08:41.314 } 00:08:41.314 ] 00:08:41.314 }' 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.314 15:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.883 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:41.883 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:41.883 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:41.883 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.883 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.883 
15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.883 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.883 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:41.883 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.883 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.883 [2024-11-25 15:35:40.361665] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.883 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.883 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.883 "name": "raid_bdev1", 00:08:41.883 "aliases": [ 00:08:41.883 "0314f88b-9dec-4766-9b93-83652e582e9b" 00:08:41.883 ], 00:08:41.883 "product_name": "Raid Volume", 00:08:41.883 "block_size": 512, 00:08:41.883 "num_blocks": 63488, 00:08:41.883 "uuid": "0314f88b-9dec-4766-9b93-83652e582e9b", 00:08:41.883 "assigned_rate_limits": { 00:08:41.883 "rw_ios_per_sec": 0, 00:08:41.883 "rw_mbytes_per_sec": 0, 00:08:41.883 "r_mbytes_per_sec": 0, 00:08:41.883 "w_mbytes_per_sec": 0 00:08:41.883 }, 00:08:41.883 "claimed": false, 00:08:41.883 "zoned": false, 00:08:41.883 "supported_io_types": { 00:08:41.883 "read": true, 00:08:41.883 "write": true, 00:08:41.883 "unmap": false, 00:08:41.883 "flush": false, 00:08:41.883 "reset": true, 00:08:41.883 "nvme_admin": false, 00:08:41.883 "nvme_io": false, 00:08:41.883 "nvme_io_md": false, 00:08:41.883 "write_zeroes": true, 00:08:41.883 "zcopy": false, 00:08:41.883 "get_zone_info": false, 00:08:41.883 "zone_management": false, 00:08:41.883 "zone_append": false, 00:08:41.883 "compare": false, 00:08:41.883 "compare_and_write": false, 00:08:41.883 "abort": false, 00:08:41.883 "seek_hole": false, 
00:08:41.883 "seek_data": false, 00:08:41.883 "copy": false, 00:08:41.883 "nvme_iov_md": false 00:08:41.883 }, 00:08:41.883 "memory_domains": [ 00:08:41.883 { 00:08:41.883 "dma_device_id": "system", 00:08:41.883 "dma_device_type": 1 00:08:41.883 }, 00:08:41.883 { 00:08:41.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.883 "dma_device_type": 2 00:08:41.883 }, 00:08:41.883 { 00:08:41.883 "dma_device_id": "system", 00:08:41.883 "dma_device_type": 1 00:08:41.883 }, 00:08:41.883 { 00:08:41.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.883 "dma_device_type": 2 00:08:41.883 } 00:08:41.883 ], 00:08:41.883 "driver_specific": { 00:08:41.883 "raid": { 00:08:41.883 "uuid": "0314f88b-9dec-4766-9b93-83652e582e9b", 00:08:41.883 "strip_size_kb": 0, 00:08:41.884 "state": "online", 00:08:41.884 "raid_level": "raid1", 00:08:41.884 "superblock": true, 00:08:41.884 "num_base_bdevs": 2, 00:08:41.884 "num_base_bdevs_discovered": 2, 00:08:41.884 "num_base_bdevs_operational": 2, 00:08:41.884 "base_bdevs_list": [ 00:08:41.884 { 00:08:41.884 "name": "pt1", 00:08:41.884 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.884 "is_configured": true, 00:08:41.884 "data_offset": 2048, 00:08:41.884 "data_size": 63488 00:08:41.884 }, 00:08:41.884 { 00:08:41.884 "name": "pt2", 00:08:41.884 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.884 "is_configured": true, 00:08:41.884 "data_offset": 2048, 00:08:41.884 "data_size": 63488 00:08:41.884 } 00:08:41.884 ] 00:08:41.884 } 00:08:41.884 } 00:08:41.884 }' 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:41.884 pt2' 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.884 15:35:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.884 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.143 [2024-11-25 15:35:40.589251] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0314f88b-9dec-4766-9b93-83652e582e9b 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0314f88b-9dec-4766-9b93-83652e582e9b ']' 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.143 [2024-11-25 15:35:40.616913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.143 [2024-11-25 15:35:40.616940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.143 [2024-11-25 15:35:40.617040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.143 [2024-11-25 15:35:40.617099] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.143 [2024-11-25 15:35:40.617115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.143 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.144 [2024-11-25 15:35:40.752756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:42.144 [2024-11-25 15:35:40.754709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:42.144 [2024-11-25 15:35:40.754776] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:08:42.144 [2024-11-25 15:35:40.754835] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:42.144 [2024-11-25 15:35:40.754851] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.144 [2024-11-25 15:35:40.754863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:42.144 request: 00:08:42.144 { 00:08:42.144 "name": "raid_bdev1", 00:08:42.144 "raid_level": "raid1", 00:08:42.144 "base_bdevs": [ 00:08:42.144 "malloc1", 00:08:42.144 "malloc2" 00:08:42.144 ], 00:08:42.144 "superblock": false, 00:08:42.144 "method": "bdev_raid_create", 00:08:42.144 "req_id": 1 00:08:42.144 } 00:08:42.144 Got JSON-RPC error response 00:08:42.144 response: 00:08:42.144 { 00:08:42.144 "code": -17, 00:08:42.144 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:42.144 } 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.144 [2024-11-25 15:35:40.812612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:42.144 [2024-11-25 15:35:40.812747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.144 [2024-11-25 15:35:40.812782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:42.144 [2024-11-25 15:35:40.812822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.144 [2024-11-25 15:35:40.815108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.144 [2024-11-25 15:35:40.815188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:42.144 [2024-11-25 15:35:40.815316] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:42.144 [2024-11-25 15:35:40.815430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:42.144 pt1 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.144 15:35:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.144 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.403 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.403 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.403 "name": "raid_bdev1", 00:08:42.403 "uuid": "0314f88b-9dec-4766-9b93-83652e582e9b", 00:08:42.403 "strip_size_kb": 0, 00:08:42.403 "state": "configuring", 00:08:42.403 "raid_level": "raid1", 00:08:42.403 "superblock": true, 00:08:42.403 "num_base_bdevs": 2, 00:08:42.403 "num_base_bdevs_discovered": 1, 00:08:42.403 "num_base_bdevs_operational": 2, 00:08:42.403 "base_bdevs_list": [ 00:08:42.403 { 00:08:42.403 "name": "pt1", 00:08:42.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.403 
"is_configured": true, 00:08:42.403 "data_offset": 2048, 00:08:42.403 "data_size": 63488 00:08:42.403 }, 00:08:42.403 { 00:08:42.403 "name": null, 00:08:42.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.403 "is_configured": false, 00:08:42.403 "data_offset": 2048, 00:08:42.403 "data_size": 63488 00:08:42.403 } 00:08:42.403 ] 00:08:42.403 }' 00:08:42.403 15:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.403 15:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.662 [2024-11-25 15:35:41.271839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:42.662 [2024-11-25 15:35:41.271998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.662 [2024-11-25 15:35:41.272062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:42.662 [2024-11-25 15:35:41.272131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.662 [2024-11-25 15:35:41.272643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.662 [2024-11-25 15:35:41.272667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:42.662 [2024-11-25 15:35:41.272751] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:42.662 [2024-11-25 15:35:41.272775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:42.662 [2024-11-25 15:35:41.272881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:42.662 [2024-11-25 15:35:41.272891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:42.662 [2024-11-25 15:35:41.273139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:42.662 [2024-11-25 15:35:41.273339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:42.662 [2024-11-25 15:35:41.273356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:42.662 [2024-11-25 15:35:41.273512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.662 pt2 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.662 
15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.662 "name": "raid_bdev1", 00:08:42.662 "uuid": "0314f88b-9dec-4766-9b93-83652e582e9b", 00:08:42.662 "strip_size_kb": 0, 00:08:42.662 "state": "online", 00:08:42.662 "raid_level": "raid1", 00:08:42.662 "superblock": true, 00:08:42.662 "num_base_bdevs": 2, 00:08:42.662 "num_base_bdevs_discovered": 2, 00:08:42.662 "num_base_bdevs_operational": 2, 00:08:42.662 "base_bdevs_list": [ 00:08:42.662 { 00:08:42.662 "name": "pt1", 00:08:42.662 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.662 "is_configured": true, 00:08:42.662 "data_offset": 2048, 00:08:42.662 "data_size": 63488 00:08:42.662 }, 00:08:42.662 { 00:08:42.662 "name": "pt2", 00:08:42.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.662 "is_configured": true, 00:08:42.662 "data_offset": 2048, 00:08:42.662 "data_size": 63488 00:08:42.662 } 00:08:42.662 ] 00:08:42.662 }' 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:42.662 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.922 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:42.922 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:42.922 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.922 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.922 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.922 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.183 [2024-11-25 15:35:41.611431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:43.183 "name": "raid_bdev1", 00:08:43.183 "aliases": [ 00:08:43.183 "0314f88b-9dec-4766-9b93-83652e582e9b" 00:08:43.183 ], 00:08:43.183 "product_name": "Raid Volume", 00:08:43.183 "block_size": 512, 00:08:43.183 "num_blocks": 63488, 00:08:43.183 "uuid": "0314f88b-9dec-4766-9b93-83652e582e9b", 00:08:43.183 "assigned_rate_limits": { 00:08:43.183 "rw_ios_per_sec": 0, 00:08:43.183 "rw_mbytes_per_sec": 0, 00:08:43.183 "r_mbytes_per_sec": 0, 00:08:43.183 "w_mbytes_per_sec": 0 
00:08:43.183 }, 00:08:43.183 "claimed": false, 00:08:43.183 "zoned": false, 00:08:43.183 "supported_io_types": { 00:08:43.183 "read": true, 00:08:43.183 "write": true, 00:08:43.183 "unmap": false, 00:08:43.183 "flush": false, 00:08:43.183 "reset": true, 00:08:43.183 "nvme_admin": false, 00:08:43.183 "nvme_io": false, 00:08:43.183 "nvme_io_md": false, 00:08:43.183 "write_zeroes": true, 00:08:43.183 "zcopy": false, 00:08:43.183 "get_zone_info": false, 00:08:43.183 "zone_management": false, 00:08:43.183 "zone_append": false, 00:08:43.183 "compare": false, 00:08:43.183 "compare_and_write": false, 00:08:43.183 "abort": false, 00:08:43.183 "seek_hole": false, 00:08:43.183 "seek_data": false, 00:08:43.183 "copy": false, 00:08:43.183 "nvme_iov_md": false 00:08:43.183 }, 00:08:43.183 "memory_domains": [ 00:08:43.183 { 00:08:43.183 "dma_device_id": "system", 00:08:43.183 "dma_device_type": 1 00:08:43.183 }, 00:08:43.183 { 00:08:43.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.183 "dma_device_type": 2 00:08:43.183 }, 00:08:43.183 { 00:08:43.183 "dma_device_id": "system", 00:08:43.183 "dma_device_type": 1 00:08:43.183 }, 00:08:43.183 { 00:08:43.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.183 "dma_device_type": 2 00:08:43.183 } 00:08:43.183 ], 00:08:43.183 "driver_specific": { 00:08:43.183 "raid": { 00:08:43.183 "uuid": "0314f88b-9dec-4766-9b93-83652e582e9b", 00:08:43.183 "strip_size_kb": 0, 00:08:43.183 "state": "online", 00:08:43.183 "raid_level": "raid1", 00:08:43.183 "superblock": true, 00:08:43.183 "num_base_bdevs": 2, 00:08:43.183 "num_base_bdevs_discovered": 2, 00:08:43.183 "num_base_bdevs_operational": 2, 00:08:43.183 "base_bdevs_list": [ 00:08:43.183 { 00:08:43.183 "name": "pt1", 00:08:43.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.183 "is_configured": true, 00:08:43.183 "data_offset": 2048, 00:08:43.183 "data_size": 63488 00:08:43.183 }, 00:08:43.183 { 00:08:43.183 "name": "pt2", 00:08:43.183 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:43.183 "is_configured": true, 00:08:43.183 "data_offset": 2048, 00:08:43.183 "data_size": 63488 00:08:43.183 } 00:08:43.183 ] 00:08:43.183 } 00:08:43.183 } 00:08:43.183 }' 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:43.183 pt2' 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.183 [2024-11-25 15:35:41.819073] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0314f88b-9dec-4766-9b93-83652e582e9b '!=' 0314f88b-9dec-4766-9b93-83652e582e9b ']' 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:43.183 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:43.184 [2024-11-25 15:35:41.846823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.184 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.444 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.444 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.444 "name": "raid_bdev1", 
00:08:43.444 "uuid": "0314f88b-9dec-4766-9b93-83652e582e9b", 00:08:43.444 "strip_size_kb": 0, 00:08:43.444 "state": "online", 00:08:43.444 "raid_level": "raid1", 00:08:43.444 "superblock": true, 00:08:43.444 "num_base_bdevs": 2, 00:08:43.444 "num_base_bdevs_discovered": 1, 00:08:43.444 "num_base_bdevs_operational": 1, 00:08:43.444 "base_bdevs_list": [ 00:08:43.444 { 00:08:43.444 "name": null, 00:08:43.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.444 "is_configured": false, 00:08:43.444 "data_offset": 0, 00:08:43.444 "data_size": 63488 00:08:43.444 }, 00:08:43.444 { 00:08:43.444 "name": "pt2", 00:08:43.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.444 "is_configured": true, 00:08:43.444 "data_offset": 2048, 00:08:43.444 "data_size": 63488 00:08:43.444 } 00:08:43.444 ] 00:08:43.444 }' 00:08:43.444 15:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.444 15:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.704 [2024-11-25 15:35:42.282142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.704 [2024-11-25 15:35:42.282213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.704 [2024-11-25 15:35:42.282308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.704 [2024-11-25 15:35:42.282380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.704 [2024-11-25 15:35:42.282448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:43.704 15:35:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.704 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.704 [2024-11-25 15:35:42.346011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.704 [2024-11-25 15:35:42.346078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.704 [2024-11-25 15:35:42.346096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:43.704 [2024-11-25 15:35:42.346106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.704 [2024-11-25 15:35:42.348203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.704 [2024-11-25 15:35:42.348284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.704 [2024-11-25 15:35:42.348377] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:43.704 [2024-11-25 15:35:42.348424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.704 [2024-11-25 15:35:42.348527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:43.705 [2024-11-25 15:35:42.348539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:43.705 [2024-11-25 15:35:42.348750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:43.705 [2024-11-25 15:35:42.348891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:43.705 [2024-11-25 15:35:42.348900] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:43.705 
[2024-11-25 15:35:42.349044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.705 pt2 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.705 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.965 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.965 "name": 
"raid_bdev1", 00:08:43.965 "uuid": "0314f88b-9dec-4766-9b93-83652e582e9b", 00:08:43.965 "strip_size_kb": 0, 00:08:43.965 "state": "online", 00:08:43.965 "raid_level": "raid1", 00:08:43.965 "superblock": true, 00:08:43.965 "num_base_bdevs": 2, 00:08:43.965 "num_base_bdevs_discovered": 1, 00:08:43.965 "num_base_bdevs_operational": 1, 00:08:43.965 "base_bdevs_list": [ 00:08:43.965 { 00:08:43.965 "name": null, 00:08:43.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.965 "is_configured": false, 00:08:43.965 "data_offset": 2048, 00:08:43.965 "data_size": 63488 00:08:43.965 }, 00:08:43.965 { 00:08:43.965 "name": "pt2", 00:08:43.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.965 "is_configured": true, 00:08:43.965 "data_offset": 2048, 00:08:43.965 "data_size": 63488 00:08:43.965 } 00:08:43.965 ] 00:08:43.965 }' 00:08:43.965 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.965 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.225 [2024-11-25 15:35:42.781209] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.225 [2024-11-25 15:35:42.781276] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.225 [2024-11-25 15:35:42.781355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.225 [2024-11-25 15:35:42.781423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.225 [2024-11-25 15:35:42.781509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name raid_bdev1, state offline 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.225 [2024-11-25 15:35:42.817170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:44.225 [2024-11-25 15:35:42.817220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.225 [2024-11-25 15:35:42.817237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:44.225 [2024-11-25 15:35:42.817245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.225 [2024-11-25 15:35:42.819375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.225 [2024-11-25 15:35:42.819467] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:44.225 [2024-11-25 15:35:42.819566] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:44.225 [2024-11-25 15:35:42.819621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:44.225 [2024-11-25 15:35:42.819752] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:44.225 [2024-11-25 15:35:42.819762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.225 [2024-11-25 15:35:42.819777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:44.225 [2024-11-25 15:35:42.819835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:44.225 [2024-11-25 15:35:42.819907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:44.225 [2024-11-25 15:35:42.819915] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:44.225 [2024-11-25 15:35:42.820171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:44.225 [2024-11-25 15:35:42.820316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:44.225 [2024-11-25 15:35:42.820328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:44.225 [2024-11-25 15:35:42.820488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.225 pt1 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.225 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.225 "name": "raid_bdev1", 00:08:44.225 "uuid": "0314f88b-9dec-4766-9b93-83652e582e9b", 00:08:44.225 "strip_size_kb": 0, 00:08:44.225 "state": "online", 00:08:44.225 "raid_level": "raid1", 00:08:44.225 "superblock": true, 00:08:44.226 "num_base_bdevs": 2, 00:08:44.226 "num_base_bdevs_discovered": 1, 00:08:44.226 "num_base_bdevs_operational": 1, 00:08:44.226 
"base_bdevs_list": [ 00:08:44.226 { 00:08:44.226 "name": null, 00:08:44.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.226 "is_configured": false, 00:08:44.226 "data_offset": 2048, 00:08:44.226 "data_size": 63488 00:08:44.226 }, 00:08:44.226 { 00:08:44.226 "name": "pt2", 00:08:44.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.226 "is_configured": true, 00:08:44.226 "data_offset": 2048, 00:08:44.226 "data_size": 63488 00:08:44.226 } 00:08:44.226 ] 00:08:44.226 }' 00:08:44.226 15:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.226 15:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.795 15:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:44.795 15:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:44.795 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.795 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.795 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.795 15:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:44.795 15:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.795 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.795 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.796 15:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:44.796 [2024-11-25 15:35:43.240658] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.796 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:44.796 15:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0314f88b-9dec-4766-9b93-83652e582e9b '!=' 0314f88b-9dec-4766-9b93-83652e582e9b ']' 00:08:44.796 15:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62994 00:08:44.796 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62994 ']' 00:08:44.796 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62994 00:08:44.796 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:44.796 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.796 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62994 00:08:44.796 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.796 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.796 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62994' 00:08:44.796 killing process with pid 62994 00:08:44.796 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62994 00:08:44.796 [2024-11-25 15:35:43.292546] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.796 [2024-11-25 15:35:43.292686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.796 15:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62994 00:08:44.796 [2024-11-25 15:35:43.292774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.796 [2024-11-25 15:35:43.292796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:45.056 [2024-11-25 15:35:43.496039] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:45.995 15:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:45.995 00:08:45.995 real 0m5.682s 00:08:45.995 user 0m8.571s 00:08:45.995 sys 0m0.963s 00:08:45.995 ************************************ 00:08:45.995 END TEST raid_superblock_test 00:08:45.995 ************************************ 00:08:45.995 15:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.995 15:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.995 15:35:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:45.995 15:35:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:45.995 15:35:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.995 15:35:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.995 ************************************ 00:08:45.995 START TEST raid_read_error_test 00:08:45.995 ************************************ 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WRlgVWAAZf 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63319 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63319 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 63319 ']' 00:08:45.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.995 15:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.255 [2024-11-25 15:35:44.742062] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:08:46.255 [2024-11-25 15:35:44.742633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63319 ] 00:08:46.255 [2024-11-25 15:35:44.914746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.516 [2024-11-25 15:35:45.026882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.776 [2024-11-25 15:35:45.222879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.776 [2024-11-25 15:35:45.222921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.037 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.037 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:47.037 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:47.037 15:35:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:47.037 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.037 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.037 BaseBdev1_malloc 00:08:47.037 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.037 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:47.037 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.037 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.037 true 00:08:47.037 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.037 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:47.037 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.038 [2024-11-25 15:35:45.609274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:47.038 [2024-11-25 15:35:45.609329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.038 [2024-11-25 15:35:45.609376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:47.038 [2024-11-25 15:35:45.609387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.038 [2024-11-25 15:35:45.611430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.038 [2024-11-25 15:35:45.611551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:08:47.038 BaseBdev1 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.038 BaseBdev2_malloc 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.038 true 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.038 [2024-11-25 15:35:45.673317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:47.038 [2024-11-25 15:35:45.673375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.038 [2024-11-25 15:35:45.673394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:47.038 [2024-11-25 15:35:45.673404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:08:47.038 [2024-11-25 15:35:45.675481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.038 [2024-11-25 15:35:45.675524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:47.038 BaseBdev2 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.038 [2024-11-25 15:35:45.685336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.038 [2024-11-25 15:35:45.687124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.038 [2024-11-25 15:35:45.687316] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:47.038 [2024-11-25 15:35:45.687331] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:47.038 [2024-11-25 15:35:45.687558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:47.038 [2024-11-25 15:35:45.687746] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:47.038 [2024-11-25 15:35:45.687756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:47.038 [2024-11-25 15:35:45.687890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.038 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.298 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.298 "name": "raid_bdev1", 00:08:47.298 "uuid": "8077ff0b-0feb-4a51-81b3-7ba0d2eace6e", 00:08:47.298 "strip_size_kb": 0, 00:08:47.298 "state": "online", 00:08:47.298 "raid_level": "raid1", 00:08:47.298 "superblock": true, 00:08:47.298 "num_base_bdevs": 2, 00:08:47.298 "num_base_bdevs_discovered": 2, 00:08:47.298 "num_base_bdevs_operational": 
2, 00:08:47.298 "base_bdevs_list": [ 00:08:47.298 { 00:08:47.298 "name": "BaseBdev1", 00:08:47.298 "uuid": "d4db052b-4d26-59c9-ab15-2c0b4998de7c", 00:08:47.298 "is_configured": true, 00:08:47.298 "data_offset": 2048, 00:08:47.298 "data_size": 63488 00:08:47.298 }, 00:08:47.298 { 00:08:47.298 "name": "BaseBdev2", 00:08:47.298 "uuid": "fdbc608d-7d92-5765-8714-c458fdd7cb41", 00:08:47.298 "is_configured": true, 00:08:47.298 "data_offset": 2048, 00:08:47.298 "data_size": 63488 00:08:47.298 } 00:08:47.298 ] 00:08:47.298 }' 00:08:47.298 15:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.298 15:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.558 15:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:47.558 15:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:47.818 [2024-11-25 15:35:46.241691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:48.757 
15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.757 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.758 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.758 "name": "raid_bdev1", 00:08:48.758 "uuid": "8077ff0b-0feb-4a51-81b3-7ba0d2eace6e", 00:08:48.758 "strip_size_kb": 0, 00:08:48.758 "state": "online", 00:08:48.758 "raid_level": "raid1", 00:08:48.758 "superblock": true, 00:08:48.758 "num_base_bdevs": 
2, 00:08:48.758 "num_base_bdevs_discovered": 2, 00:08:48.758 "num_base_bdevs_operational": 2, 00:08:48.758 "base_bdevs_list": [ 00:08:48.758 { 00:08:48.758 "name": "BaseBdev1", 00:08:48.758 "uuid": "d4db052b-4d26-59c9-ab15-2c0b4998de7c", 00:08:48.758 "is_configured": true, 00:08:48.758 "data_offset": 2048, 00:08:48.758 "data_size": 63488 00:08:48.758 }, 00:08:48.758 { 00:08:48.758 "name": "BaseBdev2", 00:08:48.758 "uuid": "fdbc608d-7d92-5765-8714-c458fdd7cb41", 00:08:48.758 "is_configured": true, 00:08:48.758 "data_offset": 2048, 00:08:48.758 "data_size": 63488 00:08:48.758 } 00:08:48.758 ] 00:08:48.758 }' 00:08:48.758 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.758 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.018 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:49.018 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.018 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.018 [2024-11-25 15:35:47.615642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:49.018 [2024-11-25 15:35:47.615762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.018 [2024-11-25 15:35:47.618384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.018 [2024-11-25 15:35:47.618477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.018 [2024-11-25 15:35:47.618582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.018 [2024-11-25 15:35:47.618661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:49.018 { 00:08:49.018 "results": [ 00:08:49.018 { 00:08:49.018 "job": 
"raid_bdev1", 00:08:49.018 "core_mask": "0x1", 00:08:49.018 "workload": "randrw", 00:08:49.018 "percentage": 50, 00:08:49.018 "status": "finished", 00:08:49.018 "queue_depth": 1, 00:08:49.018 "io_size": 131072, 00:08:49.018 "runtime": 1.374799, 00:08:49.018 "iops": 18685.640591824696, 00:08:49.018 "mibps": 2335.705073978087, 00:08:49.018 "io_failed": 0, 00:08:49.018 "io_timeout": 0, 00:08:49.018 "avg_latency_us": 51.02437258840674, 00:08:49.018 "min_latency_us": 21.463755458515283, 00:08:49.018 "max_latency_us": 1423.7624454148472 00:08:49.018 } 00:08:49.018 ], 00:08:49.018 "core_count": 1 00:08:49.018 } 00:08:49.018 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.018 15:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63319 00:08:49.018 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63319 ']' 00:08:49.018 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63319 00:08:49.018 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:49.018 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.018 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63319 00:08:49.018 killing process with pid 63319 00:08:49.018 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.018 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.018 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63319' 00:08:49.018 15:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63319 00:08:49.018 [2024-11-25 15:35:47.653684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.018 15:35:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63319 00:08:49.278 [2024-11-25 15:35:47.786110] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.659 15:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WRlgVWAAZf 00:08:50.659 15:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:50.659 15:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:50.659 15:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:50.659 15:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:50.659 15:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:50.659 15:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:50.659 15:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:50.659 00:08:50.659 real 0m4.286s 00:08:50.659 user 0m5.178s 00:08:50.659 sys 0m0.503s 00:08:50.659 15:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.659 15:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.659 ************************************ 00:08:50.659 END TEST raid_read_error_test 00:08:50.659 ************************************ 00:08:50.659 15:35:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:50.659 15:35:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:50.659 15:35:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.659 15:35:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.659 ************************************ 00:08:50.659 START TEST raid_write_error_test 00:08:50.659 ************************************ 00:08:50.659 15:35:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:50.659 
15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:50.659 15:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:50.659 15:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Ov99PNdw0S 00:08:50.659 15:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63459 00:08:50.659 15:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63459 00:08:50.659 15:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:50.659 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63459 ']' 00:08:50.659 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.659 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.659 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.659 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.659 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.660 [2024-11-25 15:35:49.091651] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:08:50.660 [2024-11-25 15:35:49.091855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63459 ] 00:08:50.660 [2024-11-25 15:35:49.249265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.919 [2024-11-25 15:35:49.361561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.919 [2024-11-25 15:35:49.561881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.919 [2024-11-25 15:35:49.561967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.488 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.488 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:51.488 15:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.488 15:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:51.488 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.488 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.488 BaseBdev1_malloc 00:08:51.488 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.488 15:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:51.488 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.488 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.488 true 00:08:51.488 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:51.488 15:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:51.488 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.488 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.488 [2024-11-25 15:35:49.977430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:51.488 [2024-11-25 15:35:49.977542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.488 [2024-11-25 15:35:49.977566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:51.489 [2024-11-25 15:35:49.977577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.489 [2024-11-25 15:35:49.979691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.489 [2024-11-25 15:35:49.979732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:51.489 BaseBdev1 00:08:51.489 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.489 15:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.489 15:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:51.489 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.489 15:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.489 BaseBdev2_malloc 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:51.489 15:35:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.489 true 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.489 [2024-11-25 15:35:50.044339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:51.489 [2024-11-25 15:35:50.044394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.489 [2024-11-25 15:35:50.044410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:51.489 [2024-11-25 15:35:50.044420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.489 [2024-11-25 15:35:50.046424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.489 [2024-11-25 15:35:50.046462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:51.489 BaseBdev2 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.489 [2024-11-25 15:35:50.056379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:51.489 [2024-11-25 15:35:50.058144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.489 [2024-11-25 15:35:50.058332] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:51.489 [2024-11-25 15:35:50.058347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:51.489 [2024-11-25 15:35:50.058581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:51.489 [2024-11-25 15:35:50.058758] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:51.489 [2024-11-25 15:35:50.058768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:51.489 [2024-11-25 15:35:50.058910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.489 "name": "raid_bdev1", 00:08:51.489 "uuid": "f4f877bf-d614-4498-bb5f-a05e7e38fd68", 00:08:51.489 "strip_size_kb": 0, 00:08:51.489 "state": "online", 00:08:51.489 "raid_level": "raid1", 00:08:51.489 "superblock": true, 00:08:51.489 "num_base_bdevs": 2, 00:08:51.489 "num_base_bdevs_discovered": 2, 00:08:51.489 "num_base_bdevs_operational": 2, 00:08:51.489 "base_bdevs_list": [ 00:08:51.489 { 00:08:51.489 "name": "BaseBdev1", 00:08:51.489 "uuid": "9457cb8f-d5c6-5232-a1eb-94020c1b0669", 00:08:51.489 "is_configured": true, 00:08:51.489 "data_offset": 2048, 00:08:51.489 "data_size": 63488 00:08:51.489 }, 00:08:51.489 { 00:08:51.489 "name": "BaseBdev2", 00:08:51.489 "uuid": "4abf4a2e-30f2-5dd4-b82e-9aa68f98a357", 00:08:51.489 "is_configured": true, 00:08:51.489 "data_offset": 2048, 00:08:51.489 "data_size": 63488 00:08:51.489 } 00:08:51.489 ] 00:08:51.489 }' 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.489 15:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.058 15:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:52.058 15:35:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:52.058 [2024-11-25 15:35:50.552877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.994 [2024-11-25 15:35:51.480884] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:52.994 [2024-11-25 15:35:51.481076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:52.994 [2024-11-25 15:35:51.481325] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.994 "name": "raid_bdev1", 00:08:52.994 "uuid": "f4f877bf-d614-4498-bb5f-a05e7e38fd68", 00:08:52.994 "strip_size_kb": 0, 00:08:52.994 "state": "online", 00:08:52.994 "raid_level": "raid1", 00:08:52.994 "superblock": true, 00:08:52.994 "num_base_bdevs": 2, 00:08:52.994 "num_base_bdevs_discovered": 1, 00:08:52.994 "num_base_bdevs_operational": 1, 00:08:52.994 "base_bdevs_list": [ 00:08:52.994 { 00:08:52.994 "name": null, 00:08:52.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.994 "is_configured": false, 00:08:52.994 "data_offset": 0, 00:08:52.994 "data_size": 63488 00:08:52.994 }, 00:08:52.994 { 00:08:52.994 "name": 
"BaseBdev2", 00:08:52.994 "uuid": "4abf4a2e-30f2-5dd4-b82e-9aa68f98a357", 00:08:52.994 "is_configured": true, 00:08:52.994 "data_offset": 2048, 00:08:52.994 "data_size": 63488 00:08:52.994 } 00:08:52.994 ] 00:08:52.994 }' 00:08:52.994 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.995 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.255 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:53.255 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.255 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.255 [2024-11-25 15:35:51.893797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.255 [2024-11-25 15:35:51.893831] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.255 [2024-11-25 15:35:51.896338] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.255 [2024-11-25 15:35:51.896377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.255 [2024-11-25 15:35:51.896432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.255 [2024-11-25 15:35:51.896444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:53.255 { 00:08:53.255 "results": [ 00:08:53.255 { 00:08:53.255 "job": "raid_bdev1", 00:08:53.255 "core_mask": "0x1", 00:08:53.255 "workload": "randrw", 00:08:53.255 "percentage": 50, 00:08:53.255 "status": "finished", 00:08:53.255 "queue_depth": 1, 00:08:53.255 "io_size": 131072, 00:08:53.255 "runtime": 1.341488, 00:08:53.255 "iops": 21542.49609388977, 00:08:53.255 "mibps": 2692.8120117362214, 00:08:53.255 "io_failed": 0, 00:08:53.255 "io_timeout": 0, 
00:08:53.255 "avg_latency_us": 43.88104440234631, 00:08:53.255 "min_latency_us": 21.910917030567685, 00:08:53.255 "max_latency_us": 1345.0620087336245 00:08:53.255 } 00:08:53.255 ], 00:08:53.255 "core_count": 1 00:08:53.255 } 00:08:53.255 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.255 15:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63459 00:08:53.255 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63459 ']' 00:08:53.255 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63459 00:08:53.255 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:53.255 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.255 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63459 00:08:53.515 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.515 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.515 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63459' 00:08:53.515 killing process with pid 63459 00:08:53.515 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63459 00:08:53.515 [2024-11-25 15:35:51.941779] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:53.515 15:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63459 00:08:53.515 [2024-11-25 15:35:52.070517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.897 15:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Ov99PNdw0S 00:08:54.897 15:35:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:54.897 15:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:54.897 15:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:54.897 15:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:54.897 15:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:54.897 15:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:54.897 ************************************ 00:08:54.897 END TEST raid_write_error_test 00:08:54.897 ************************************ 00:08:54.897 15:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:54.897 00:08:54.897 real 0m4.207s 00:08:54.897 user 0m5.017s 00:08:54.897 sys 0m0.497s 00:08:54.897 15:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.897 15:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.897 15:35:53 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:54.897 15:35:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:54.897 15:35:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:54.897 15:35:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:54.897 15:35:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.897 15:35:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.897 ************************************ 00:08:54.897 START TEST raid_state_function_test 00:08:54.897 ************************************ 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:54.897 
15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63597 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63597' 00:08:54.897 Process raid pid: 63597 00:08:54.897 15:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63597 00:08:54.898 15:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63597 ']' 00:08:54.898 15:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.898 15:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.898 15:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:54.898 15:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.898 15:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.898 [2024-11-25 15:35:53.364846] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:08:54.898 [2024-11-25 15:35:53.364959] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.898 [2024-11-25 15:35:53.520645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.157 [2024-11-25 15:35:53.632564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.157 [2024-11-25 15:35:53.834530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.157 [2024-11-25 15:35:53.834576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.728 [2024-11-25 15:35:54.193446] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.728 [2024-11-25 15:35:54.193506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.728 [2024-11-25 15:35:54.193517] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.728 [2024-11-25 15:35:54.193527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.728 [2024-11-25 15:35:54.193533] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.728 [2024-11-25 15:35:54.193542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.728 15:35:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.728 "name": "Existed_Raid", 00:08:55.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.728 "strip_size_kb": 64, 00:08:55.728 "state": "configuring", 00:08:55.728 "raid_level": "raid0", 00:08:55.728 "superblock": false, 00:08:55.728 "num_base_bdevs": 3, 00:08:55.728 "num_base_bdevs_discovered": 0, 00:08:55.728 "num_base_bdevs_operational": 3, 00:08:55.728 "base_bdevs_list": [ 00:08:55.728 { 00:08:55.728 "name": "BaseBdev1", 00:08:55.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.728 "is_configured": false, 00:08:55.728 "data_offset": 0, 00:08:55.728 "data_size": 0 00:08:55.728 }, 00:08:55.728 { 00:08:55.728 "name": "BaseBdev2", 00:08:55.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.728 "is_configured": false, 00:08:55.728 "data_offset": 0, 00:08:55.728 "data_size": 0 00:08:55.728 }, 00:08:55.728 { 00:08:55.728 "name": "BaseBdev3", 00:08:55.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.728 "is_configured": false, 00:08:55.728 "data_offset": 0, 00:08:55.728 "data_size": 0 00:08:55.728 } 00:08:55.728 ] 00:08:55.728 }' 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.728 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.987 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.987 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.988 15:35:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.248 [2024-11-25 15:35:54.668596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.248 [2024-11-25 15:35:54.668699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.248 [2024-11-25 15:35:54.680546] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.248 [2024-11-25 15:35:54.680643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.248 [2024-11-25 15:35:54.680672] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.248 [2024-11-25 15:35:54.680694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.248 [2024-11-25 15:35:54.680712] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:56.248 [2024-11-25 15:35:54.680735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.248 [2024-11-25 15:35:54.725849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.248 BaseBdev1 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.248 [ 00:08:56.248 { 00:08:56.248 "name": "BaseBdev1", 00:08:56.248 "aliases": [ 00:08:56.248 "e9139c06-8158-4954-9e12-fe277aa07af5" 00:08:56.248 ], 00:08:56.248 
"product_name": "Malloc disk", 00:08:56.248 "block_size": 512, 00:08:56.248 "num_blocks": 65536, 00:08:56.248 "uuid": "e9139c06-8158-4954-9e12-fe277aa07af5", 00:08:56.248 "assigned_rate_limits": { 00:08:56.248 "rw_ios_per_sec": 0, 00:08:56.248 "rw_mbytes_per_sec": 0, 00:08:56.248 "r_mbytes_per_sec": 0, 00:08:56.248 "w_mbytes_per_sec": 0 00:08:56.248 }, 00:08:56.248 "claimed": true, 00:08:56.248 "claim_type": "exclusive_write", 00:08:56.248 "zoned": false, 00:08:56.248 "supported_io_types": { 00:08:56.248 "read": true, 00:08:56.248 "write": true, 00:08:56.248 "unmap": true, 00:08:56.248 "flush": true, 00:08:56.248 "reset": true, 00:08:56.248 "nvme_admin": false, 00:08:56.248 "nvme_io": false, 00:08:56.248 "nvme_io_md": false, 00:08:56.248 "write_zeroes": true, 00:08:56.248 "zcopy": true, 00:08:56.248 "get_zone_info": false, 00:08:56.248 "zone_management": false, 00:08:56.248 "zone_append": false, 00:08:56.248 "compare": false, 00:08:56.248 "compare_and_write": false, 00:08:56.248 "abort": true, 00:08:56.248 "seek_hole": false, 00:08:56.248 "seek_data": false, 00:08:56.248 "copy": true, 00:08:56.248 "nvme_iov_md": false 00:08:56.248 }, 00:08:56.248 "memory_domains": [ 00:08:56.248 { 00:08:56.248 "dma_device_id": "system", 00:08:56.248 "dma_device_type": 1 00:08:56.248 }, 00:08:56.248 { 00:08:56.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.248 "dma_device_type": 2 00:08:56.248 } 00:08:56.248 ], 00:08:56.248 "driver_specific": {} 00:08:56.248 } 00:08:56.248 ] 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.248 15:35:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.248 "name": "Existed_Raid", 00:08:56.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.248 "strip_size_kb": 64, 00:08:56.248 "state": "configuring", 00:08:56.248 "raid_level": "raid0", 00:08:56.248 "superblock": false, 00:08:56.248 "num_base_bdevs": 3, 00:08:56.248 "num_base_bdevs_discovered": 1, 00:08:56.248 "num_base_bdevs_operational": 3, 00:08:56.248 "base_bdevs_list": [ 00:08:56.248 { 00:08:56.248 "name": "BaseBdev1", 
00:08:56.248 "uuid": "e9139c06-8158-4954-9e12-fe277aa07af5", 00:08:56.248 "is_configured": true, 00:08:56.248 "data_offset": 0, 00:08:56.248 "data_size": 65536 00:08:56.248 }, 00:08:56.248 { 00:08:56.248 "name": "BaseBdev2", 00:08:56.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.248 "is_configured": false, 00:08:56.248 "data_offset": 0, 00:08:56.248 "data_size": 0 00:08:56.248 }, 00:08:56.248 { 00:08:56.248 "name": "BaseBdev3", 00:08:56.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.248 "is_configured": false, 00:08:56.248 "data_offset": 0, 00:08:56.248 "data_size": 0 00:08:56.248 } 00:08:56.248 ] 00:08:56.248 }' 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.248 15:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.817 [2024-11-25 15:35:55.233025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.817 [2024-11-25 15:35:55.233136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.817 [2024-11-25 
15:35:55.245064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.817 [2024-11-25 15:35:55.246813] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.817 [2024-11-25 15:35:55.246861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.817 [2024-11-25 15:35:55.246871] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:56.817 [2024-11-25 15:35:55.246879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.817 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.818 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.818 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.818 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.818 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.818 "name": "Existed_Raid", 00:08:56.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.818 "strip_size_kb": 64, 00:08:56.818 "state": "configuring", 00:08:56.818 "raid_level": "raid0", 00:08:56.818 "superblock": false, 00:08:56.818 "num_base_bdevs": 3, 00:08:56.818 "num_base_bdevs_discovered": 1, 00:08:56.818 "num_base_bdevs_operational": 3, 00:08:56.818 "base_bdevs_list": [ 00:08:56.818 { 00:08:56.818 "name": "BaseBdev1", 00:08:56.818 "uuid": "e9139c06-8158-4954-9e12-fe277aa07af5", 00:08:56.818 "is_configured": true, 00:08:56.818 "data_offset": 0, 00:08:56.818 "data_size": 65536 00:08:56.818 }, 00:08:56.818 { 00:08:56.818 "name": "BaseBdev2", 00:08:56.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.818 "is_configured": false, 00:08:56.818 "data_offset": 0, 00:08:56.818 "data_size": 0 00:08:56.818 }, 00:08:56.818 { 00:08:56.818 "name": "BaseBdev3", 00:08:56.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.818 "is_configured": false, 00:08:56.818 "data_offset": 0, 00:08:56.818 "data_size": 0 00:08:56.818 } 00:08:56.818 ] 00:08:56.818 }' 00:08:56.818 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:56.818 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.077 [2024-11-25 15:35:55.728858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:57.077 BaseBdev2 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:57.077 15:35:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.077 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.077 [ 00:08:57.077 { 00:08:57.077 "name": "BaseBdev2", 00:08:57.077 "aliases": [ 00:08:57.077 "1f417e6e-1a5a-4d82-bda1-bfe43272729d" 00:08:57.077 ], 00:08:57.077 "product_name": "Malloc disk", 00:08:57.077 "block_size": 512, 00:08:57.077 "num_blocks": 65536, 00:08:57.077 "uuid": "1f417e6e-1a5a-4d82-bda1-bfe43272729d", 00:08:57.077 "assigned_rate_limits": { 00:08:57.077 "rw_ios_per_sec": 0, 00:08:57.077 "rw_mbytes_per_sec": 0, 00:08:57.077 "r_mbytes_per_sec": 0, 00:08:57.077 "w_mbytes_per_sec": 0 00:08:57.077 }, 00:08:57.077 "claimed": true, 00:08:57.337 "claim_type": "exclusive_write", 00:08:57.337 "zoned": false, 00:08:57.337 "supported_io_types": { 00:08:57.337 "read": true, 00:08:57.337 "write": true, 00:08:57.337 "unmap": true, 00:08:57.337 "flush": true, 00:08:57.337 "reset": true, 00:08:57.337 "nvme_admin": false, 00:08:57.337 "nvme_io": false, 00:08:57.337 "nvme_io_md": false, 00:08:57.337 "write_zeroes": true, 00:08:57.337 "zcopy": true, 00:08:57.337 "get_zone_info": false, 00:08:57.337 "zone_management": false, 00:08:57.337 "zone_append": false, 00:08:57.337 "compare": false, 00:08:57.337 "compare_and_write": false, 00:08:57.337 "abort": true, 00:08:57.337 "seek_hole": false, 00:08:57.337 "seek_data": false, 00:08:57.337 "copy": true, 00:08:57.337 "nvme_iov_md": false 00:08:57.337 }, 00:08:57.337 "memory_domains": [ 00:08:57.337 { 00:08:57.337 "dma_device_id": "system", 00:08:57.337 "dma_device_type": 1 00:08:57.337 }, 00:08:57.337 { 00:08:57.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.337 "dma_device_type": 2 00:08:57.337 } 00:08:57.337 ], 00:08:57.337 "driver_specific": {} 00:08:57.337 } 00:08:57.337 ] 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.337 15:35:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.337 "name": "Existed_Raid", 00:08:57.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.337 "strip_size_kb": 64, 00:08:57.337 "state": "configuring", 00:08:57.337 "raid_level": "raid0", 00:08:57.337 "superblock": false, 00:08:57.337 "num_base_bdevs": 3, 00:08:57.337 "num_base_bdevs_discovered": 2, 00:08:57.337 "num_base_bdevs_operational": 3, 00:08:57.337 "base_bdevs_list": [ 00:08:57.337 { 00:08:57.337 "name": "BaseBdev1", 00:08:57.337 "uuid": "e9139c06-8158-4954-9e12-fe277aa07af5", 00:08:57.337 "is_configured": true, 00:08:57.337 "data_offset": 0, 00:08:57.337 "data_size": 65536 00:08:57.337 }, 00:08:57.337 { 00:08:57.337 "name": "BaseBdev2", 00:08:57.337 "uuid": "1f417e6e-1a5a-4d82-bda1-bfe43272729d", 00:08:57.337 "is_configured": true, 00:08:57.337 "data_offset": 0, 00:08:57.337 "data_size": 65536 00:08:57.337 }, 00:08:57.337 { 00:08:57.337 "name": "BaseBdev3", 00:08:57.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.337 "is_configured": false, 00:08:57.337 "data_offset": 0, 00:08:57.337 "data_size": 0 00:08:57.337 } 00:08:57.337 ] 00:08:57.337 }' 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.337 15:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.605 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:57.605 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.605 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.605 [2024-11-25 15:35:56.242561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:57.605 [2024-11-25 15:35:56.242604] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:57.605 [2024-11-25 15:35:56.242618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:57.605 [2024-11-25 15:35:56.242895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:57.606 [2024-11-25 15:35:56.243086] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:57.606 [2024-11-25 15:35:56.243097] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:57.606 [2024-11-25 15:35:56.243378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.606 BaseBdev3 00:08:57.606 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.606 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:57.606 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:57.606 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.606 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.606 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.606 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.606 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.606 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.606 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.606 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.606 
15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:57.606 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.606 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.606 [ 00:08:57.606 { 00:08:57.606 "name": "BaseBdev3", 00:08:57.606 "aliases": [ 00:08:57.606 "642160f7-2cc6-4591-b034-b0410ed8eb60" 00:08:57.606 ], 00:08:57.606 "product_name": "Malloc disk", 00:08:57.606 "block_size": 512, 00:08:57.606 "num_blocks": 65536, 00:08:57.606 "uuid": "642160f7-2cc6-4591-b034-b0410ed8eb60", 00:08:57.606 "assigned_rate_limits": { 00:08:57.606 "rw_ios_per_sec": 0, 00:08:57.606 "rw_mbytes_per_sec": 0, 00:08:57.606 "r_mbytes_per_sec": 0, 00:08:57.606 "w_mbytes_per_sec": 0 00:08:57.606 }, 00:08:57.606 "claimed": true, 00:08:57.606 "claim_type": "exclusive_write", 00:08:57.606 "zoned": false, 00:08:57.606 "supported_io_types": { 00:08:57.606 "read": true, 00:08:57.606 "write": true, 00:08:57.606 "unmap": true, 00:08:57.606 "flush": true, 00:08:57.606 "reset": true, 00:08:57.606 "nvme_admin": false, 00:08:57.606 "nvme_io": false, 00:08:57.606 "nvme_io_md": false, 00:08:57.606 "write_zeroes": true, 00:08:57.606 "zcopy": true, 00:08:57.606 "get_zone_info": false, 00:08:57.606 "zone_management": false, 00:08:57.879 "zone_append": false, 00:08:57.879 "compare": false, 00:08:57.879 "compare_and_write": false, 00:08:57.879 "abort": true, 00:08:57.879 "seek_hole": false, 00:08:57.879 "seek_data": false, 00:08:57.879 "copy": true, 00:08:57.879 "nvme_iov_md": false 00:08:57.879 }, 00:08:57.879 "memory_domains": [ 00:08:57.879 { 00:08:57.879 "dma_device_id": "system", 00:08:57.879 "dma_device_type": 1 00:08:57.879 }, 00:08:57.879 { 00:08:57.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.879 "dma_device_type": 2 00:08:57.879 } 00:08:57.879 ], 00:08:57.879 "driver_specific": {} 00:08:57.879 } 00:08:57.879 ] 
00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.879 "name": "Existed_Raid", 00:08:57.879 "uuid": "d5116157-a2e1-4ba5-bdc5-086762f4b48a", 00:08:57.879 "strip_size_kb": 64, 00:08:57.879 "state": "online", 00:08:57.879 "raid_level": "raid0", 00:08:57.879 "superblock": false, 00:08:57.879 "num_base_bdevs": 3, 00:08:57.879 "num_base_bdevs_discovered": 3, 00:08:57.879 "num_base_bdevs_operational": 3, 00:08:57.879 "base_bdevs_list": [ 00:08:57.879 { 00:08:57.879 "name": "BaseBdev1", 00:08:57.879 "uuid": "e9139c06-8158-4954-9e12-fe277aa07af5", 00:08:57.879 "is_configured": true, 00:08:57.879 "data_offset": 0, 00:08:57.879 "data_size": 65536 00:08:57.879 }, 00:08:57.879 { 00:08:57.879 "name": "BaseBdev2", 00:08:57.879 "uuid": "1f417e6e-1a5a-4d82-bda1-bfe43272729d", 00:08:57.879 "is_configured": true, 00:08:57.879 "data_offset": 0, 00:08:57.879 "data_size": 65536 00:08:57.879 }, 00:08:57.879 { 00:08:57.879 "name": "BaseBdev3", 00:08:57.879 "uuid": "642160f7-2cc6-4591-b034-b0410ed8eb60", 00:08:57.879 "is_configured": true, 00:08:57.879 "data_offset": 0, 00:08:57.879 "data_size": 65536 00:08:57.879 } 00:08:57.879 ] 00:08:57.879 }' 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.879 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.138 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:58.138 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:58.138 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:58.138 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:58.138 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:58.138 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:58.138 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:58.138 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.138 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.138 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:58.138 [2024-11-25 15:35:56.754233] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.138 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.138 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:58.138 "name": "Existed_Raid", 00:08:58.138 "aliases": [ 00:08:58.138 "d5116157-a2e1-4ba5-bdc5-086762f4b48a" 00:08:58.138 ], 00:08:58.138 "product_name": "Raid Volume", 00:08:58.138 "block_size": 512, 00:08:58.138 "num_blocks": 196608, 00:08:58.138 "uuid": "d5116157-a2e1-4ba5-bdc5-086762f4b48a", 00:08:58.138 "assigned_rate_limits": { 00:08:58.138 "rw_ios_per_sec": 0, 00:08:58.138 "rw_mbytes_per_sec": 0, 00:08:58.138 "r_mbytes_per_sec": 0, 00:08:58.138 "w_mbytes_per_sec": 0 00:08:58.138 }, 00:08:58.138 "claimed": false, 00:08:58.138 "zoned": false, 00:08:58.138 "supported_io_types": { 00:08:58.138 "read": true, 00:08:58.138 "write": true, 00:08:58.138 "unmap": true, 00:08:58.138 "flush": true, 00:08:58.138 "reset": true, 00:08:58.138 "nvme_admin": false, 00:08:58.138 "nvme_io": false, 00:08:58.138 "nvme_io_md": false, 00:08:58.138 "write_zeroes": true, 00:08:58.138 "zcopy": false, 00:08:58.138 "get_zone_info": false, 00:08:58.138 "zone_management": false, 00:08:58.138 
"zone_append": false, 00:08:58.138 "compare": false, 00:08:58.138 "compare_and_write": false, 00:08:58.138 "abort": false, 00:08:58.138 "seek_hole": false, 00:08:58.138 "seek_data": false, 00:08:58.138 "copy": false, 00:08:58.138 "nvme_iov_md": false 00:08:58.138 }, 00:08:58.138 "memory_domains": [ 00:08:58.138 { 00:08:58.138 "dma_device_id": "system", 00:08:58.138 "dma_device_type": 1 00:08:58.138 }, 00:08:58.138 { 00:08:58.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.138 "dma_device_type": 2 00:08:58.138 }, 00:08:58.138 { 00:08:58.138 "dma_device_id": "system", 00:08:58.138 "dma_device_type": 1 00:08:58.138 }, 00:08:58.138 { 00:08:58.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.138 "dma_device_type": 2 00:08:58.138 }, 00:08:58.138 { 00:08:58.138 "dma_device_id": "system", 00:08:58.138 "dma_device_type": 1 00:08:58.138 }, 00:08:58.138 { 00:08:58.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.138 "dma_device_type": 2 00:08:58.138 } 00:08:58.138 ], 00:08:58.138 "driver_specific": { 00:08:58.138 "raid": { 00:08:58.138 "uuid": "d5116157-a2e1-4ba5-bdc5-086762f4b48a", 00:08:58.138 "strip_size_kb": 64, 00:08:58.138 "state": "online", 00:08:58.138 "raid_level": "raid0", 00:08:58.138 "superblock": false, 00:08:58.138 "num_base_bdevs": 3, 00:08:58.138 "num_base_bdevs_discovered": 3, 00:08:58.138 "num_base_bdevs_operational": 3, 00:08:58.139 "base_bdevs_list": [ 00:08:58.139 { 00:08:58.139 "name": "BaseBdev1", 00:08:58.139 "uuid": "e9139c06-8158-4954-9e12-fe277aa07af5", 00:08:58.139 "is_configured": true, 00:08:58.139 "data_offset": 0, 00:08:58.139 "data_size": 65536 00:08:58.139 }, 00:08:58.139 { 00:08:58.139 "name": "BaseBdev2", 00:08:58.139 "uuid": "1f417e6e-1a5a-4d82-bda1-bfe43272729d", 00:08:58.139 "is_configured": true, 00:08:58.139 "data_offset": 0, 00:08:58.139 "data_size": 65536 00:08:58.139 }, 00:08:58.139 { 00:08:58.139 "name": "BaseBdev3", 00:08:58.139 "uuid": "642160f7-2cc6-4591-b034-b0410ed8eb60", 00:08:58.139 "is_configured": true, 
00:08:58.139 "data_offset": 0, 00:08:58.139 "data_size": 65536 00:08:58.139 } 00:08:58.139 ] 00:08:58.139 } 00:08:58.139 } 00:08:58.139 }' 00:08:58.139 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:58.399 BaseBdev2 00:08:58.399 BaseBdev3' 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.399 15:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.399 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.399 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:58.399 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.399 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.399 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.399 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.399 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.399 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:58.399 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.399 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.399 [2024-11-25 15:35:57.061426] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:58.399 [2024-11-25 15:35:57.061455] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.399 [2024-11-25 15:35:57.061508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.659 "name": "Existed_Raid", 00:08:58.659 "uuid": "d5116157-a2e1-4ba5-bdc5-086762f4b48a", 00:08:58.659 "strip_size_kb": 64, 00:08:58.659 "state": "offline", 00:08:58.659 "raid_level": "raid0", 00:08:58.659 "superblock": false, 00:08:58.659 "num_base_bdevs": 3, 00:08:58.659 "num_base_bdevs_discovered": 2, 00:08:58.659 "num_base_bdevs_operational": 2, 00:08:58.659 "base_bdevs_list": [ 00:08:58.659 { 00:08:58.659 "name": null, 00:08:58.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.659 "is_configured": false, 00:08:58.659 "data_offset": 0, 00:08:58.659 "data_size": 65536 00:08:58.659 }, 00:08:58.659 { 00:08:58.659 "name": "BaseBdev2", 00:08:58.659 "uuid": "1f417e6e-1a5a-4d82-bda1-bfe43272729d", 00:08:58.659 "is_configured": true, 00:08:58.659 "data_offset": 0, 00:08:58.659 "data_size": 65536 00:08:58.659 }, 00:08:58.659 { 00:08:58.659 "name": "BaseBdev3", 00:08:58.659 "uuid": "642160f7-2cc6-4591-b034-b0410ed8eb60", 00:08:58.659 "is_configured": true, 00:08:58.659 "data_offset": 0, 00:08:58.659 "data_size": 65536 00:08:58.659 } 00:08:58.659 ] 00:08:58.659 }' 00:08:58.659 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.659 15:35:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.228 [2024-11-25 15:35:57.672788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.228 15:35:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.228 [2024-11-25 15:35:57.813678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:59.228 [2024-11-25 15:35:57.813778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:59.228 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:59.489 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.489 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:59.489 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.489 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:59.489 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.489 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:59.489 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:59.489 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:59.489 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:59.489 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.489 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:59.489 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.489 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.489 BaseBdev2 00:08:59.490 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.490 15:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:59.490 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:59.490 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.490 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.490 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.490 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.490 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.490 15:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.490 [ 00:08:59.490 { 00:08:59.490 "name": "BaseBdev2", 00:08:59.490 "aliases": [ 00:08:59.490 "87df2f39-4994-44da-8b1d-03018eee1447" 00:08:59.490 ], 00:08:59.490 "product_name": "Malloc disk", 00:08:59.490 "block_size": 512, 00:08:59.490 "num_blocks": 65536, 00:08:59.490 "uuid": "87df2f39-4994-44da-8b1d-03018eee1447", 00:08:59.490 "assigned_rate_limits": { 00:08:59.490 "rw_ios_per_sec": 0, 00:08:59.490 "rw_mbytes_per_sec": 0, 00:08:59.490 "r_mbytes_per_sec": 0, 00:08:59.490 "w_mbytes_per_sec": 0 00:08:59.490 }, 00:08:59.490 "claimed": false, 00:08:59.490 "zoned": false, 00:08:59.490 "supported_io_types": { 00:08:59.490 "read": true, 00:08:59.490 "write": true, 00:08:59.490 "unmap": true, 00:08:59.490 "flush": true, 00:08:59.490 "reset": true, 00:08:59.490 "nvme_admin": false, 00:08:59.490 "nvme_io": false, 00:08:59.490 "nvme_io_md": false, 00:08:59.490 "write_zeroes": true, 00:08:59.490 "zcopy": true, 00:08:59.490 "get_zone_info": false, 00:08:59.490 "zone_management": false, 00:08:59.490 "zone_append": false, 00:08:59.490 "compare": false, 00:08:59.490 "compare_and_write": false, 00:08:59.490 "abort": true, 00:08:59.490 "seek_hole": false, 00:08:59.490 "seek_data": false, 00:08:59.490 "copy": true, 00:08:59.490 "nvme_iov_md": false 00:08:59.490 }, 00:08:59.490 "memory_domains": [ 00:08:59.490 { 00:08:59.490 "dma_device_id": "system", 00:08:59.490 "dma_device_type": 1 00:08:59.490 }, 
00:08:59.490 { 00:08:59.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.490 "dma_device_type": 2 00:08:59.490 } 00:08:59.490 ], 00:08:59.490 "driver_specific": {} 00:08:59.490 } 00:08:59.490 ] 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.490 BaseBdev3 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.490 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.490 [ 00:08:59.490 { 00:08:59.490 "name": "BaseBdev3", 00:08:59.490 "aliases": [ 00:08:59.490 "6f81af30-460a-4156-93fa-0b02181db461" 00:08:59.490 ], 00:08:59.490 "product_name": "Malloc disk", 00:08:59.490 "block_size": 512, 00:08:59.490 "num_blocks": 65536, 00:08:59.490 "uuid": "6f81af30-460a-4156-93fa-0b02181db461", 00:08:59.490 "assigned_rate_limits": { 00:08:59.490 "rw_ios_per_sec": 0, 00:08:59.490 "rw_mbytes_per_sec": 0, 00:08:59.490 "r_mbytes_per_sec": 0, 00:08:59.490 "w_mbytes_per_sec": 0 00:08:59.490 }, 00:08:59.490 "claimed": false, 00:08:59.490 "zoned": false, 00:08:59.490 "supported_io_types": { 00:08:59.490 "read": true, 00:08:59.490 "write": true, 00:08:59.490 "unmap": true, 00:08:59.490 "flush": true, 00:08:59.490 "reset": true, 00:08:59.490 "nvme_admin": false, 00:08:59.490 "nvme_io": false, 00:08:59.490 "nvme_io_md": false, 00:08:59.490 "write_zeroes": true, 00:08:59.490 "zcopy": true, 00:08:59.490 "get_zone_info": false, 00:08:59.490 "zone_management": false, 00:08:59.490 "zone_append": false, 00:08:59.490 "compare": false, 00:08:59.490 "compare_and_write": false, 00:08:59.490 "abort": true, 00:08:59.491 "seek_hole": false, 00:08:59.491 "seek_data": false, 00:08:59.491 "copy": true, 00:08:59.491 "nvme_iov_md": false 00:08:59.491 }, 00:08:59.491 "memory_domains": [ 00:08:59.491 { 00:08:59.491 "dma_device_id": "system", 00:08:59.491 "dma_device_type": 1 00:08:59.491 }, 00:08:59.491 { 
00:08:59.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.491 "dma_device_type": 2 00:08:59.491 } 00:08:59.491 ], 00:08:59.491 "driver_specific": {} 00:08:59.491 } 00:08:59.491 ] 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.491 [2024-11-25 15:35:58.118996] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.491 [2024-11-25 15:35:58.119091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.491 [2024-11-25 15:35:58.119141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.491 [2024-11-25 15:35:58.120918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.491 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.751 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.751 "name": "Existed_Raid", 00:08:59.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.751 "strip_size_kb": 64, 00:08:59.751 "state": "configuring", 00:08:59.751 "raid_level": "raid0", 00:08:59.751 "superblock": false, 00:08:59.751 "num_base_bdevs": 3, 00:08:59.751 "num_base_bdevs_discovered": 2, 00:08:59.751 "num_base_bdevs_operational": 3, 00:08:59.751 "base_bdevs_list": [ 00:08:59.751 { 00:08:59.751 "name": "BaseBdev1", 00:08:59.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.751 
"is_configured": false, 00:08:59.751 "data_offset": 0, 00:08:59.751 "data_size": 0 00:08:59.751 }, 00:08:59.751 { 00:08:59.751 "name": "BaseBdev2", 00:08:59.751 "uuid": "87df2f39-4994-44da-8b1d-03018eee1447", 00:08:59.751 "is_configured": true, 00:08:59.751 "data_offset": 0, 00:08:59.751 "data_size": 65536 00:08:59.751 }, 00:08:59.751 { 00:08:59.751 "name": "BaseBdev3", 00:08:59.751 "uuid": "6f81af30-460a-4156-93fa-0b02181db461", 00:08:59.751 "is_configured": true, 00:08:59.751 "data_offset": 0, 00:08:59.751 "data_size": 65536 00:08:59.751 } 00:08:59.751 ] 00:08:59.751 }' 00:08:59.751 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.751 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.012 [2024-11-25 15:35:58.510478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.012 15:35:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.012 "name": "Existed_Raid", 00:09:00.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.012 "strip_size_kb": 64, 00:09:00.012 "state": "configuring", 00:09:00.012 "raid_level": "raid0", 00:09:00.012 "superblock": false, 00:09:00.012 "num_base_bdevs": 3, 00:09:00.012 "num_base_bdevs_discovered": 1, 00:09:00.012 "num_base_bdevs_operational": 3, 00:09:00.012 "base_bdevs_list": [ 00:09:00.012 { 00:09:00.012 "name": "BaseBdev1", 00:09:00.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.012 "is_configured": false, 00:09:00.012 "data_offset": 0, 00:09:00.012 "data_size": 0 00:09:00.012 }, 00:09:00.012 { 00:09:00.012 "name": null, 00:09:00.012 "uuid": "87df2f39-4994-44da-8b1d-03018eee1447", 00:09:00.012 "is_configured": false, 00:09:00.012 "data_offset": 0, 
00:09:00.012 "data_size": 65536 00:09:00.012 }, 00:09:00.012 { 00:09:00.012 "name": "BaseBdev3", 00:09:00.012 "uuid": "6f81af30-460a-4156-93fa-0b02181db461", 00:09:00.012 "is_configured": true, 00:09:00.012 "data_offset": 0, 00:09:00.012 "data_size": 65536 00:09:00.012 } 00:09:00.012 ] 00:09:00.012 }' 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.012 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.271 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:00.271 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.272 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.272 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.272 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.272 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:00.272 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:00.272 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.272 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.531 [2024-11-25 15:35:58.977730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.531 BaseBdev1 00:09:00.531 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.531 15:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:00.531 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:09:00.531 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.532 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.532 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.532 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.532 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.532 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.532 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.532 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.532 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:00.532 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.532 15:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.532 [ 00:09:00.532 { 00:09:00.532 "name": "BaseBdev1", 00:09:00.532 "aliases": [ 00:09:00.532 "13ca99a9-a32a-427e-8ec0-f4bbb2fdffbb" 00:09:00.532 ], 00:09:00.532 "product_name": "Malloc disk", 00:09:00.532 "block_size": 512, 00:09:00.532 "num_blocks": 65536, 00:09:00.532 "uuid": "13ca99a9-a32a-427e-8ec0-f4bbb2fdffbb", 00:09:00.532 "assigned_rate_limits": { 00:09:00.532 "rw_ios_per_sec": 0, 00:09:00.532 "rw_mbytes_per_sec": 0, 00:09:00.532 "r_mbytes_per_sec": 0, 00:09:00.532 "w_mbytes_per_sec": 0 00:09:00.532 }, 00:09:00.532 "claimed": true, 00:09:00.532 "claim_type": "exclusive_write", 00:09:00.532 "zoned": false, 00:09:00.532 "supported_io_types": { 00:09:00.532 "read": true, 00:09:00.532 "write": true, 00:09:00.532 "unmap": 
true, 00:09:00.532 "flush": true, 00:09:00.532 "reset": true, 00:09:00.532 "nvme_admin": false, 00:09:00.532 "nvme_io": false, 00:09:00.532 "nvme_io_md": false, 00:09:00.532 "write_zeroes": true, 00:09:00.532 "zcopy": true, 00:09:00.532 "get_zone_info": false, 00:09:00.532 "zone_management": false, 00:09:00.532 "zone_append": false, 00:09:00.532 "compare": false, 00:09:00.532 "compare_and_write": false, 00:09:00.532 "abort": true, 00:09:00.532 "seek_hole": false, 00:09:00.532 "seek_data": false, 00:09:00.532 "copy": true, 00:09:00.532 "nvme_iov_md": false 00:09:00.532 }, 00:09:00.532 "memory_domains": [ 00:09:00.532 { 00:09:00.532 "dma_device_id": "system", 00:09:00.532 "dma_device_type": 1 00:09:00.532 }, 00:09:00.532 { 00:09:00.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.532 "dma_device_type": 2 00:09:00.532 } 00:09:00.532 ], 00:09:00.532 "driver_specific": {} 00:09:00.532 } 00:09:00.532 ] 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.532 15:35:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.532 "name": "Existed_Raid", 00:09:00.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.532 "strip_size_kb": 64, 00:09:00.532 "state": "configuring", 00:09:00.532 "raid_level": "raid0", 00:09:00.532 "superblock": false, 00:09:00.532 "num_base_bdevs": 3, 00:09:00.532 "num_base_bdevs_discovered": 2, 00:09:00.532 "num_base_bdevs_operational": 3, 00:09:00.532 "base_bdevs_list": [ 00:09:00.532 { 00:09:00.532 "name": "BaseBdev1", 00:09:00.532 "uuid": "13ca99a9-a32a-427e-8ec0-f4bbb2fdffbb", 00:09:00.532 "is_configured": true, 00:09:00.532 "data_offset": 0, 00:09:00.532 "data_size": 65536 00:09:00.532 }, 00:09:00.532 { 00:09:00.532 "name": null, 00:09:00.532 "uuid": "87df2f39-4994-44da-8b1d-03018eee1447", 00:09:00.532 "is_configured": false, 00:09:00.532 "data_offset": 0, 00:09:00.532 "data_size": 65536 00:09:00.532 }, 00:09:00.532 { 00:09:00.532 "name": "BaseBdev3", 00:09:00.532 "uuid": "6f81af30-460a-4156-93fa-0b02181db461", 00:09:00.532 "is_configured": true, 00:09:00.532 "data_offset": 0, 
00:09:00.532 "data_size": 65536 00:09:00.532 } 00:09:00.532 ] 00:09:00.532 }' 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.532 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.102 [2024-11-25 15:35:59.532853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.102 "name": "Existed_Raid", 00:09:01.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.102 "strip_size_kb": 64, 00:09:01.102 "state": "configuring", 00:09:01.102 "raid_level": "raid0", 00:09:01.102 "superblock": false, 00:09:01.102 "num_base_bdevs": 3, 00:09:01.102 "num_base_bdevs_discovered": 1, 00:09:01.102 "num_base_bdevs_operational": 3, 00:09:01.102 "base_bdevs_list": [ 00:09:01.102 { 00:09:01.102 "name": "BaseBdev1", 00:09:01.102 "uuid": "13ca99a9-a32a-427e-8ec0-f4bbb2fdffbb", 00:09:01.102 "is_configured": true, 00:09:01.102 "data_offset": 0, 00:09:01.102 "data_size": 65536 00:09:01.102 }, 00:09:01.102 { 
00:09:01.102 "name": null, 00:09:01.102 "uuid": "87df2f39-4994-44da-8b1d-03018eee1447", 00:09:01.102 "is_configured": false, 00:09:01.102 "data_offset": 0, 00:09:01.102 "data_size": 65536 00:09:01.102 }, 00:09:01.102 { 00:09:01.102 "name": null, 00:09:01.102 "uuid": "6f81af30-460a-4156-93fa-0b02181db461", 00:09:01.102 "is_configured": false, 00:09:01.102 "data_offset": 0, 00:09:01.102 "data_size": 65536 00:09:01.102 } 00:09:01.102 ] 00:09:01.102 }' 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.102 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.362 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:01.362 15:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.362 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.362 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.362 15:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.362 [2024-11-25 15:36:00.008085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.362 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.622 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.622 "name": "Existed_Raid", 00:09:01.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.622 "strip_size_kb": 64, 00:09:01.622 "state": "configuring", 00:09:01.622 "raid_level": "raid0", 00:09:01.622 
"superblock": false, 00:09:01.622 "num_base_bdevs": 3, 00:09:01.622 "num_base_bdevs_discovered": 2, 00:09:01.622 "num_base_bdevs_operational": 3, 00:09:01.622 "base_bdevs_list": [ 00:09:01.622 { 00:09:01.622 "name": "BaseBdev1", 00:09:01.622 "uuid": "13ca99a9-a32a-427e-8ec0-f4bbb2fdffbb", 00:09:01.622 "is_configured": true, 00:09:01.622 "data_offset": 0, 00:09:01.622 "data_size": 65536 00:09:01.622 }, 00:09:01.622 { 00:09:01.622 "name": null, 00:09:01.622 "uuid": "87df2f39-4994-44da-8b1d-03018eee1447", 00:09:01.622 "is_configured": false, 00:09:01.622 "data_offset": 0, 00:09:01.622 "data_size": 65536 00:09:01.622 }, 00:09:01.622 { 00:09:01.622 "name": "BaseBdev3", 00:09:01.622 "uuid": "6f81af30-460a-4156-93fa-0b02181db461", 00:09:01.622 "is_configured": true, 00:09:01.622 "data_offset": 0, 00:09:01.622 "data_size": 65536 00:09:01.622 } 00:09:01.622 ] 00:09:01.622 }' 00:09:01.622 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.622 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.882 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:01.882 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.882 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.882 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.882 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.882 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:01.882 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:01.882 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:01.882 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.882 [2024-11-25 15:36:00.459265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.142 "name": "Existed_Raid", 00:09:02.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.142 "strip_size_kb": 64, 00:09:02.142 "state": "configuring", 00:09:02.142 "raid_level": "raid0", 00:09:02.142 "superblock": false, 00:09:02.142 "num_base_bdevs": 3, 00:09:02.142 "num_base_bdevs_discovered": 1, 00:09:02.142 "num_base_bdevs_operational": 3, 00:09:02.142 "base_bdevs_list": [ 00:09:02.142 { 00:09:02.142 "name": null, 00:09:02.142 "uuid": "13ca99a9-a32a-427e-8ec0-f4bbb2fdffbb", 00:09:02.142 "is_configured": false, 00:09:02.142 "data_offset": 0, 00:09:02.142 "data_size": 65536 00:09:02.142 }, 00:09:02.142 { 00:09:02.142 "name": null, 00:09:02.142 "uuid": "87df2f39-4994-44da-8b1d-03018eee1447", 00:09:02.142 "is_configured": false, 00:09:02.142 "data_offset": 0, 00:09:02.142 "data_size": 65536 00:09:02.142 }, 00:09:02.142 { 00:09:02.142 "name": "BaseBdev3", 00:09:02.142 "uuid": "6f81af30-460a-4156-93fa-0b02181db461", 00:09:02.142 "is_configured": true, 00:09:02.142 "data_offset": 0, 00:09:02.142 "data_size": 65536 00:09:02.142 } 00:09:02.142 ] 00:09:02.142 }' 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.142 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.403 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.403 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.403 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.403 15:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:02.403 15:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.403 [2024-11-25 15:36:01.033184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.403 "name": "Existed_Raid", 00:09:02.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.403 "strip_size_kb": 64, 00:09:02.403 "state": "configuring", 00:09:02.403 "raid_level": "raid0", 00:09:02.403 "superblock": false, 00:09:02.403 "num_base_bdevs": 3, 00:09:02.403 "num_base_bdevs_discovered": 2, 00:09:02.403 "num_base_bdevs_operational": 3, 00:09:02.403 "base_bdevs_list": [ 00:09:02.403 { 00:09:02.403 "name": null, 00:09:02.403 "uuid": "13ca99a9-a32a-427e-8ec0-f4bbb2fdffbb", 00:09:02.403 "is_configured": false, 00:09:02.403 "data_offset": 0, 00:09:02.403 "data_size": 65536 00:09:02.403 }, 00:09:02.403 { 00:09:02.403 "name": "BaseBdev2", 00:09:02.403 "uuid": "87df2f39-4994-44da-8b1d-03018eee1447", 00:09:02.403 "is_configured": true, 00:09:02.403 "data_offset": 0, 00:09:02.403 "data_size": 65536 00:09:02.403 }, 00:09:02.403 { 00:09:02.403 "name": "BaseBdev3", 00:09:02.403 "uuid": "6f81af30-460a-4156-93fa-0b02181db461", 00:09:02.403 "is_configured": true, 00:09:02.403 "data_offset": 0, 00:09:02.403 "data_size": 65536 00:09:02.403 } 00:09:02.403 ] 00:09:02.403 }' 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.403 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.971 
15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 13ca99a9-a32a-427e-8ec0-f4bbb2fdffbb 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.971 [2024-11-25 15:36:01.566148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:02.971 [2024-11-25 15:36:01.566198] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:02.971 [2024-11-25 15:36:01.566208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:02.971 [2024-11-25 15:36:01.566536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:02.971 [2024-11-25 15:36:01.566717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:02.971 [2024-11-25 15:36:01.566727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:02.971 [2024-11-25 15:36:01.566995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.971 NewBaseBdev 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.971 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:02.972 [ 00:09:02.972 { 00:09:02.972 "name": "NewBaseBdev", 00:09:02.972 "aliases": [ 00:09:02.972 "13ca99a9-a32a-427e-8ec0-f4bbb2fdffbb" 00:09:02.972 ], 00:09:02.972 "product_name": "Malloc disk", 00:09:02.972 "block_size": 512, 00:09:02.972 "num_blocks": 65536, 00:09:02.972 "uuid": "13ca99a9-a32a-427e-8ec0-f4bbb2fdffbb", 00:09:02.972 "assigned_rate_limits": { 00:09:02.972 "rw_ios_per_sec": 0, 00:09:02.972 "rw_mbytes_per_sec": 0, 00:09:02.972 "r_mbytes_per_sec": 0, 00:09:02.972 "w_mbytes_per_sec": 0 00:09:02.972 }, 00:09:02.972 "claimed": true, 00:09:02.972 "claim_type": "exclusive_write", 00:09:02.972 "zoned": false, 00:09:02.972 "supported_io_types": { 00:09:02.972 "read": true, 00:09:02.972 "write": true, 00:09:02.972 "unmap": true, 00:09:02.972 "flush": true, 00:09:02.972 "reset": true, 00:09:02.972 "nvme_admin": false, 00:09:02.972 "nvme_io": false, 00:09:02.972 "nvme_io_md": false, 00:09:02.972 "write_zeroes": true, 00:09:02.972 "zcopy": true, 00:09:02.972 "get_zone_info": false, 00:09:02.972 "zone_management": false, 00:09:02.972 "zone_append": false, 00:09:02.972 "compare": false, 00:09:02.972 "compare_and_write": false, 00:09:02.972 "abort": true, 00:09:02.972 "seek_hole": false, 00:09:02.972 "seek_data": false, 00:09:02.972 "copy": true, 00:09:02.972 "nvme_iov_md": false 00:09:02.972 }, 00:09:02.972 "memory_domains": [ 00:09:02.972 { 00:09:02.972 "dma_device_id": "system", 00:09:02.972 "dma_device_type": 1 00:09:02.972 }, 00:09:02.972 { 00:09:02.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.972 "dma_device_type": 2 00:09:02.972 } 00:09:02.972 ], 00:09:02.972 "driver_specific": {} 00:09:02.972 } 00:09:02.972 ] 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.972 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.231 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.231 "name": "Existed_Raid", 00:09:03.231 "uuid": "2d74f2df-17b6-4348-89eb-c69dd3e43093", 00:09:03.231 "strip_size_kb": 64, 00:09:03.231 "state": "online", 00:09:03.231 "raid_level": "raid0", 00:09:03.231 "superblock": false, 00:09:03.231 "num_base_bdevs": 3, 00:09:03.231 
"num_base_bdevs_discovered": 3, 00:09:03.231 "num_base_bdevs_operational": 3, 00:09:03.231 "base_bdevs_list": [ 00:09:03.231 { 00:09:03.231 "name": "NewBaseBdev", 00:09:03.231 "uuid": "13ca99a9-a32a-427e-8ec0-f4bbb2fdffbb", 00:09:03.231 "is_configured": true, 00:09:03.231 "data_offset": 0, 00:09:03.231 "data_size": 65536 00:09:03.231 }, 00:09:03.231 { 00:09:03.231 "name": "BaseBdev2", 00:09:03.231 "uuid": "87df2f39-4994-44da-8b1d-03018eee1447", 00:09:03.231 "is_configured": true, 00:09:03.231 "data_offset": 0, 00:09:03.231 "data_size": 65536 00:09:03.231 }, 00:09:03.231 { 00:09:03.231 "name": "BaseBdev3", 00:09:03.231 "uuid": "6f81af30-460a-4156-93fa-0b02181db461", 00:09:03.231 "is_configured": true, 00:09:03.231 "data_offset": 0, 00:09:03.231 "data_size": 65536 00:09:03.231 } 00:09:03.231 ] 00:09:03.231 }' 00:09:03.231 15:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.231 15:36:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.490 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:03.490 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:03.490 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:03.491 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:03.491 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:03.491 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:03.491 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:03.491 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:03.491 15:36:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.491 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.491 [2024-11-25 15:36:02.033929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.491 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.491 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:03.491 "name": "Existed_Raid", 00:09:03.491 "aliases": [ 00:09:03.491 "2d74f2df-17b6-4348-89eb-c69dd3e43093" 00:09:03.491 ], 00:09:03.491 "product_name": "Raid Volume", 00:09:03.491 "block_size": 512, 00:09:03.491 "num_blocks": 196608, 00:09:03.491 "uuid": "2d74f2df-17b6-4348-89eb-c69dd3e43093", 00:09:03.491 "assigned_rate_limits": { 00:09:03.491 "rw_ios_per_sec": 0, 00:09:03.491 "rw_mbytes_per_sec": 0, 00:09:03.491 "r_mbytes_per_sec": 0, 00:09:03.491 "w_mbytes_per_sec": 0 00:09:03.491 }, 00:09:03.491 "claimed": false, 00:09:03.491 "zoned": false, 00:09:03.491 "supported_io_types": { 00:09:03.491 "read": true, 00:09:03.491 "write": true, 00:09:03.491 "unmap": true, 00:09:03.491 "flush": true, 00:09:03.491 "reset": true, 00:09:03.491 "nvme_admin": false, 00:09:03.491 "nvme_io": false, 00:09:03.491 "nvme_io_md": false, 00:09:03.491 "write_zeroes": true, 00:09:03.491 "zcopy": false, 00:09:03.491 "get_zone_info": false, 00:09:03.491 "zone_management": false, 00:09:03.491 "zone_append": false, 00:09:03.491 "compare": false, 00:09:03.491 "compare_and_write": false, 00:09:03.491 "abort": false, 00:09:03.491 "seek_hole": false, 00:09:03.491 "seek_data": false, 00:09:03.491 "copy": false, 00:09:03.491 "nvme_iov_md": false 00:09:03.491 }, 00:09:03.491 "memory_domains": [ 00:09:03.491 { 00:09:03.491 "dma_device_id": "system", 00:09:03.491 "dma_device_type": 1 00:09:03.491 }, 00:09:03.491 { 00:09:03.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.491 "dma_device_type": 2 00:09:03.491 }, 
00:09:03.491 { 00:09:03.491 "dma_device_id": "system", 00:09:03.491 "dma_device_type": 1 00:09:03.491 }, 00:09:03.491 { 00:09:03.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.491 "dma_device_type": 2 00:09:03.491 }, 00:09:03.491 { 00:09:03.491 "dma_device_id": "system", 00:09:03.491 "dma_device_type": 1 00:09:03.491 }, 00:09:03.491 { 00:09:03.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.491 "dma_device_type": 2 00:09:03.491 } 00:09:03.491 ], 00:09:03.491 "driver_specific": { 00:09:03.491 "raid": { 00:09:03.491 "uuid": "2d74f2df-17b6-4348-89eb-c69dd3e43093", 00:09:03.491 "strip_size_kb": 64, 00:09:03.491 "state": "online", 00:09:03.491 "raid_level": "raid0", 00:09:03.491 "superblock": false, 00:09:03.491 "num_base_bdevs": 3, 00:09:03.491 "num_base_bdevs_discovered": 3, 00:09:03.491 "num_base_bdevs_operational": 3, 00:09:03.491 "base_bdevs_list": [ 00:09:03.491 { 00:09:03.491 "name": "NewBaseBdev", 00:09:03.491 "uuid": "13ca99a9-a32a-427e-8ec0-f4bbb2fdffbb", 00:09:03.491 "is_configured": true, 00:09:03.491 "data_offset": 0, 00:09:03.491 "data_size": 65536 00:09:03.491 }, 00:09:03.491 { 00:09:03.491 "name": "BaseBdev2", 00:09:03.491 "uuid": "87df2f39-4994-44da-8b1d-03018eee1447", 00:09:03.491 "is_configured": true, 00:09:03.491 "data_offset": 0, 00:09:03.491 "data_size": 65536 00:09:03.491 }, 00:09:03.491 { 00:09:03.491 "name": "BaseBdev3", 00:09:03.491 "uuid": "6f81af30-460a-4156-93fa-0b02181db461", 00:09:03.491 "is_configured": true, 00:09:03.491 "data_offset": 0, 00:09:03.491 "data_size": 65536 00:09:03.491 } 00:09:03.491 ] 00:09:03.491 } 00:09:03.491 } 00:09:03.491 }' 00:09:03.491 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:03.491 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:03.491 BaseBdev2 00:09:03.491 BaseBdev3' 00:09:03.491 15:36:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.779 [2024-11-25 15:36:02.325153] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.779 [2024-11-25 15:36:02.325286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.779 [2024-11-25 15:36:02.325391] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.779 [2024-11-25 15:36:02.325459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.779 [2024-11-25 15:36:02.325474] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63597 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63597 ']' 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63597 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63597 00:09:03.779 killing process with pid 63597 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63597' 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63597 00:09:03.779 [2024-11-25 15:36:02.374027] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.779 15:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63597 00:09:04.038 [2024-11-25 15:36:02.705169] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.413 15:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:05.414 00:09:05.414 real 0m10.641s 00:09:05.414 user 0m16.807s 00:09:05.414 sys 0m1.826s 00:09:05.414 ************************************ 00:09:05.414 END TEST 
raid_state_function_test 00:09:05.414 ************************************ 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.414 15:36:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:05.414 15:36:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:05.414 15:36:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.414 15:36:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.414 ************************************ 00:09:05.414 START TEST raid_state_function_test_sb 00:09:05.414 ************************************ 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.414 15:36:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=64224 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64224' 00:09:05.414 Process raid pid: 64224 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64224 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64224 ']' 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.414 15:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.414 [2024-11-25 15:36:04.073826] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:09:05.414 [2024-11-25 15:36:04.073944] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.674 [2024-11-25 15:36:04.231445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.934 [2024-11-25 15:36:04.369995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.934 [2024-11-25 15:36:04.607846] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.934 [2024-11-25 15:36:04.607893] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.503 15:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.503 15:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:06.503 15:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:06.503 15:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.503 15:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.503 [2024-11-25 15:36:04.910253] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:06.503 [2024-11-25 15:36:04.910433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:06.503 [2024-11-25 15:36:04.910463] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:06.503 [2024-11-25 15:36:04.910490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:06.503 [2024-11-25 15:36:04.910497] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:06.503 [2024-11-25 15:36:04.910506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:06.503 15:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.503 15:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:06.503 15:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.503 15:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.503 15:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.503 15:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.504 15:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.504 15:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.504 15:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.504 15:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.504 15:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.504 15:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.504 15:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.504 15:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.504 15:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.504 15:36:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.504 15:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.504 "name": "Existed_Raid", 00:09:06.504 "uuid": "b03fbd04-a7cf-4da5-a648-ff9899b3c1fd", 00:09:06.504 "strip_size_kb": 64, 00:09:06.504 "state": "configuring", 00:09:06.504 "raid_level": "raid0", 00:09:06.504 "superblock": true, 00:09:06.504 "num_base_bdevs": 3, 00:09:06.504 "num_base_bdevs_discovered": 0, 00:09:06.504 "num_base_bdevs_operational": 3, 00:09:06.504 "base_bdevs_list": [ 00:09:06.504 { 00:09:06.504 "name": "BaseBdev1", 00:09:06.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.504 "is_configured": false, 00:09:06.504 "data_offset": 0, 00:09:06.504 "data_size": 0 00:09:06.504 }, 00:09:06.504 { 00:09:06.504 "name": "BaseBdev2", 00:09:06.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.504 "is_configured": false, 00:09:06.504 "data_offset": 0, 00:09:06.504 "data_size": 0 00:09:06.504 }, 00:09:06.504 { 00:09:06.504 "name": "BaseBdev3", 00:09:06.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.504 "is_configured": false, 00:09:06.504 "data_offset": 0, 00:09:06.504 "data_size": 0 00:09:06.504 } 00:09:06.504 ] 00:09:06.504 }' 00:09:06.504 15:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.504 15:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.764 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:06.764 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.764 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.764 [2024-11-25 15:36:05.365456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:06.764 [2024-11-25 15:36:05.365595] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:06.764 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.764 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:06.764 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.764 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.764 [2024-11-25 15:36:05.373418] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:06.764 [2024-11-25 15:36:05.373522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:06.764 [2024-11-25 15:36:05.373549] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:06.764 [2024-11-25 15:36:05.373572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:06.764 [2024-11-25 15:36:05.373589] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:06.764 [2024-11-25 15:36:05.373610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:06.764 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.765 [2024-11-25 15:36:05.424308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:06.765 BaseBdev1 
00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.765 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.025 [ 00:09:07.025 { 00:09:07.025 "name": "BaseBdev1", 00:09:07.025 "aliases": [ 00:09:07.025 "56ca30bb-2521-4ec1-9f20-8d89d82bf6f0" 00:09:07.025 ], 00:09:07.025 "product_name": "Malloc disk", 00:09:07.025 "block_size": 512, 00:09:07.025 "num_blocks": 65536, 00:09:07.025 "uuid": "56ca30bb-2521-4ec1-9f20-8d89d82bf6f0", 00:09:07.025 "assigned_rate_limits": { 00:09:07.026 
"rw_ios_per_sec": 0, 00:09:07.026 "rw_mbytes_per_sec": 0, 00:09:07.026 "r_mbytes_per_sec": 0, 00:09:07.026 "w_mbytes_per_sec": 0 00:09:07.026 }, 00:09:07.026 "claimed": true, 00:09:07.026 "claim_type": "exclusive_write", 00:09:07.026 "zoned": false, 00:09:07.026 "supported_io_types": { 00:09:07.026 "read": true, 00:09:07.026 "write": true, 00:09:07.026 "unmap": true, 00:09:07.026 "flush": true, 00:09:07.026 "reset": true, 00:09:07.026 "nvme_admin": false, 00:09:07.026 "nvme_io": false, 00:09:07.026 "nvme_io_md": false, 00:09:07.026 "write_zeroes": true, 00:09:07.026 "zcopy": true, 00:09:07.026 "get_zone_info": false, 00:09:07.026 "zone_management": false, 00:09:07.026 "zone_append": false, 00:09:07.026 "compare": false, 00:09:07.026 "compare_and_write": false, 00:09:07.026 "abort": true, 00:09:07.026 "seek_hole": false, 00:09:07.026 "seek_data": false, 00:09:07.026 "copy": true, 00:09:07.026 "nvme_iov_md": false 00:09:07.026 }, 00:09:07.026 "memory_domains": [ 00:09:07.026 { 00:09:07.026 "dma_device_id": "system", 00:09:07.026 "dma_device_type": 1 00:09:07.026 }, 00:09:07.026 { 00:09:07.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.026 "dma_device_type": 2 00:09:07.026 } 00:09:07.026 ], 00:09:07.026 "driver_specific": {} 00:09:07.026 } 00:09:07.026 ] 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.026 "name": "Existed_Raid", 00:09:07.026 "uuid": "b73f68b8-1ab3-4257-b4a0-5b1423f092d3", 00:09:07.026 "strip_size_kb": 64, 00:09:07.026 "state": "configuring", 00:09:07.026 "raid_level": "raid0", 00:09:07.026 "superblock": true, 00:09:07.026 "num_base_bdevs": 3, 00:09:07.026 "num_base_bdevs_discovered": 1, 00:09:07.026 "num_base_bdevs_operational": 3, 00:09:07.026 "base_bdevs_list": [ 00:09:07.026 { 00:09:07.026 "name": "BaseBdev1", 00:09:07.026 "uuid": "56ca30bb-2521-4ec1-9f20-8d89d82bf6f0", 00:09:07.026 "is_configured": true, 00:09:07.026 "data_offset": 2048, 00:09:07.026 "data_size": 63488 
00:09:07.026 }, 00:09:07.026 { 00:09:07.026 "name": "BaseBdev2", 00:09:07.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.026 "is_configured": false, 00:09:07.026 "data_offset": 0, 00:09:07.026 "data_size": 0 00:09:07.026 }, 00:09:07.026 { 00:09:07.026 "name": "BaseBdev3", 00:09:07.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.026 "is_configured": false, 00:09:07.026 "data_offset": 0, 00:09:07.026 "data_size": 0 00:09:07.026 } 00:09:07.026 ] 00:09:07.026 }' 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.026 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.286 [2024-11-25 15:36:05.911563] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.286 [2024-11-25 15:36:05.911734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.286 [2024-11-25 15:36:05.923565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.286 [2024-11-25 
15:36:05.925794] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.286 [2024-11-25 15:36:05.925870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.286 [2024-11-25 15:36:05.925899] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:07.286 [2024-11-25 15:36:05.925920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.286 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:07.287 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.287 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.287 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.287 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.287 15:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.547 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.547 "name": "Existed_Raid", 00:09:07.547 "uuid": "1d37aa22-b397-400b-b23a-d9b3b31fb880", 00:09:07.547 "strip_size_kb": 64, 00:09:07.547 "state": "configuring", 00:09:07.547 "raid_level": "raid0", 00:09:07.547 "superblock": true, 00:09:07.547 "num_base_bdevs": 3, 00:09:07.547 "num_base_bdevs_discovered": 1, 00:09:07.547 "num_base_bdevs_operational": 3, 00:09:07.547 "base_bdevs_list": [ 00:09:07.547 { 00:09:07.547 "name": "BaseBdev1", 00:09:07.547 "uuid": "56ca30bb-2521-4ec1-9f20-8d89d82bf6f0", 00:09:07.547 "is_configured": true, 00:09:07.547 "data_offset": 2048, 00:09:07.547 "data_size": 63488 00:09:07.547 }, 00:09:07.547 { 00:09:07.547 "name": "BaseBdev2", 00:09:07.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.547 "is_configured": false, 00:09:07.547 "data_offset": 0, 00:09:07.547 "data_size": 0 00:09:07.547 }, 00:09:07.547 { 00:09:07.547 "name": "BaseBdev3", 00:09:07.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.547 "is_configured": false, 00:09:07.547 "data_offset": 0, 00:09:07.547 "data_size": 0 00:09:07.547 } 00:09:07.547 ] 00:09:07.547 }' 00:09:07.547 15:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.547 15:36:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.808 [2024-11-25 15:36:06.407897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.808 BaseBdev2 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.808 [ 00:09:07.808 { 00:09:07.808 "name": "BaseBdev2", 00:09:07.808 "aliases": [ 00:09:07.808 "3f829084-9479-4404-b454-01dd6de0c4cb" 00:09:07.808 ], 00:09:07.808 "product_name": "Malloc disk", 00:09:07.808 "block_size": 512, 00:09:07.808 "num_blocks": 65536, 00:09:07.808 "uuid": "3f829084-9479-4404-b454-01dd6de0c4cb", 00:09:07.808 "assigned_rate_limits": { 00:09:07.808 "rw_ios_per_sec": 0, 00:09:07.808 "rw_mbytes_per_sec": 0, 00:09:07.808 "r_mbytes_per_sec": 0, 00:09:07.808 "w_mbytes_per_sec": 0 00:09:07.808 }, 00:09:07.808 "claimed": true, 00:09:07.808 "claim_type": "exclusive_write", 00:09:07.808 "zoned": false, 00:09:07.808 "supported_io_types": { 00:09:07.808 "read": true, 00:09:07.808 "write": true, 00:09:07.808 "unmap": true, 00:09:07.808 "flush": true, 00:09:07.808 "reset": true, 00:09:07.808 "nvme_admin": false, 00:09:07.808 "nvme_io": false, 00:09:07.808 "nvme_io_md": false, 00:09:07.808 "write_zeroes": true, 00:09:07.808 "zcopy": true, 00:09:07.808 "get_zone_info": false, 00:09:07.808 "zone_management": false, 00:09:07.808 "zone_append": false, 00:09:07.808 "compare": false, 00:09:07.808 "compare_and_write": false, 00:09:07.808 "abort": true, 00:09:07.808 "seek_hole": false, 00:09:07.808 "seek_data": false, 00:09:07.808 "copy": true, 00:09:07.808 "nvme_iov_md": false 00:09:07.808 }, 00:09:07.808 "memory_domains": [ 00:09:07.808 { 00:09:07.808 "dma_device_id": "system", 00:09:07.808 "dma_device_type": 1 00:09:07.808 }, 00:09:07.808 { 00:09:07.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.808 "dma_device_type": 2 00:09:07.808 } 00:09:07.808 ], 00:09:07.808 "driver_specific": {} 00:09:07.808 } 00:09:07.808 ] 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.808 15:36:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.068 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.068 "name": "Existed_Raid", 00:09:08.068 "uuid": "1d37aa22-b397-400b-b23a-d9b3b31fb880", 00:09:08.068 "strip_size_kb": 64, 00:09:08.068 "state": "configuring", 00:09:08.068 "raid_level": "raid0", 00:09:08.068 "superblock": true, 00:09:08.068 "num_base_bdevs": 3, 00:09:08.068 "num_base_bdevs_discovered": 2, 00:09:08.068 "num_base_bdevs_operational": 3, 00:09:08.068 "base_bdevs_list": [ 00:09:08.068 { 00:09:08.068 "name": "BaseBdev1", 00:09:08.068 "uuid": "56ca30bb-2521-4ec1-9f20-8d89d82bf6f0", 00:09:08.068 "is_configured": true, 00:09:08.068 "data_offset": 2048, 00:09:08.068 "data_size": 63488 00:09:08.068 }, 00:09:08.068 { 00:09:08.068 "name": "BaseBdev2", 00:09:08.068 "uuid": "3f829084-9479-4404-b454-01dd6de0c4cb", 00:09:08.068 "is_configured": true, 00:09:08.068 "data_offset": 2048, 00:09:08.068 "data_size": 63488 00:09:08.068 }, 00:09:08.068 { 00:09:08.068 "name": "BaseBdev3", 00:09:08.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.068 "is_configured": false, 00:09:08.068 "data_offset": 0, 00:09:08.068 "data_size": 0 00:09:08.068 } 00:09:08.068 ] 00:09:08.068 }' 00:09:08.068 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.068 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.328 [2024-11-25 15:36:06.882798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.328 BaseBdev3 00:09:08.328 [2024-11-25 
15:36:06.883167] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:08.328 [2024-11-25 15:36:06.883198] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:08.328 [2024-11-25 15:36:06.883708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:08.328 [2024-11-25 15:36:06.883875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:08.328 [2024-11-25 15:36:06.883885] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:08.328 [2024-11-25 15:36:06.884068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.328 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.328 [ 00:09:08.328 { 00:09:08.328 "name": "BaseBdev3", 00:09:08.328 "aliases": [ 00:09:08.328 "aff7d6d9-0825-4c22-91f3-26f758002102" 00:09:08.328 ], 00:09:08.328 "product_name": "Malloc disk", 00:09:08.328 "block_size": 512, 00:09:08.328 "num_blocks": 65536, 00:09:08.328 "uuid": "aff7d6d9-0825-4c22-91f3-26f758002102", 00:09:08.328 "assigned_rate_limits": { 00:09:08.328 "rw_ios_per_sec": 0, 00:09:08.328 "rw_mbytes_per_sec": 0, 00:09:08.328 "r_mbytes_per_sec": 0, 00:09:08.328 "w_mbytes_per_sec": 0 00:09:08.328 }, 00:09:08.328 "claimed": true, 00:09:08.328 "claim_type": "exclusive_write", 00:09:08.328 "zoned": false, 00:09:08.328 "supported_io_types": { 00:09:08.328 "read": true, 00:09:08.328 "write": true, 00:09:08.328 "unmap": true, 00:09:08.328 "flush": true, 00:09:08.328 "reset": true, 00:09:08.328 "nvme_admin": false, 00:09:08.328 "nvme_io": false, 00:09:08.328 "nvme_io_md": false, 00:09:08.328 "write_zeroes": true, 00:09:08.329 "zcopy": true, 00:09:08.329 "get_zone_info": false, 00:09:08.329 "zone_management": false, 00:09:08.329 "zone_append": false, 00:09:08.329 "compare": false, 00:09:08.329 "compare_and_write": false, 00:09:08.329 "abort": true, 00:09:08.329 "seek_hole": false, 00:09:08.329 "seek_data": false, 00:09:08.329 "copy": true, 00:09:08.329 "nvme_iov_md": false 00:09:08.329 }, 00:09:08.329 "memory_domains": [ 00:09:08.329 { 00:09:08.329 "dma_device_id": "system", 00:09:08.329 "dma_device_type": 1 00:09:08.329 }, 00:09:08.329 { 00:09:08.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.329 "dma_device_type": 2 00:09:08.329 } 00:09:08.329 ], 00:09:08.329 "driver_specific": {} 
00:09:08.329 } 00:09:08.329 ] 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.329 "name": "Existed_Raid", 00:09:08.329 "uuid": "1d37aa22-b397-400b-b23a-d9b3b31fb880", 00:09:08.329 "strip_size_kb": 64, 00:09:08.329 "state": "online", 00:09:08.329 "raid_level": "raid0", 00:09:08.329 "superblock": true, 00:09:08.329 "num_base_bdevs": 3, 00:09:08.329 "num_base_bdevs_discovered": 3, 00:09:08.329 "num_base_bdevs_operational": 3, 00:09:08.329 "base_bdevs_list": [ 00:09:08.329 { 00:09:08.329 "name": "BaseBdev1", 00:09:08.329 "uuid": "56ca30bb-2521-4ec1-9f20-8d89d82bf6f0", 00:09:08.329 "is_configured": true, 00:09:08.329 "data_offset": 2048, 00:09:08.329 "data_size": 63488 00:09:08.329 }, 00:09:08.329 { 00:09:08.329 "name": "BaseBdev2", 00:09:08.329 "uuid": "3f829084-9479-4404-b454-01dd6de0c4cb", 00:09:08.329 "is_configured": true, 00:09:08.329 "data_offset": 2048, 00:09:08.329 "data_size": 63488 00:09:08.329 }, 00:09:08.329 { 00:09:08.329 "name": "BaseBdev3", 00:09:08.329 "uuid": "aff7d6d9-0825-4c22-91f3-26f758002102", 00:09:08.329 "is_configured": true, 00:09:08.329 "data_offset": 2048, 00:09:08.329 "data_size": 63488 00:09:08.329 } 00:09:08.329 ] 00:09:08.329 }' 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.329 15:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.899 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:08.899 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:08.899 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:08.899 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:08.899 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:08.899 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.899 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:08.899 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.899 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.899 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.899 [2024-11-25 15:36:07.386441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.899 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.899 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.899 "name": "Existed_Raid", 00:09:08.899 "aliases": [ 00:09:08.899 "1d37aa22-b397-400b-b23a-d9b3b31fb880" 00:09:08.899 ], 00:09:08.899 "product_name": "Raid Volume", 00:09:08.899 "block_size": 512, 00:09:08.899 "num_blocks": 190464, 00:09:08.899 "uuid": "1d37aa22-b397-400b-b23a-d9b3b31fb880", 00:09:08.899 "assigned_rate_limits": { 00:09:08.899 "rw_ios_per_sec": 0, 00:09:08.899 "rw_mbytes_per_sec": 0, 00:09:08.899 "r_mbytes_per_sec": 0, 00:09:08.899 "w_mbytes_per_sec": 0 00:09:08.899 }, 00:09:08.899 "claimed": false, 00:09:08.899 "zoned": false, 00:09:08.899 "supported_io_types": { 00:09:08.899 "read": true, 00:09:08.899 "write": true, 00:09:08.899 "unmap": true, 00:09:08.899 "flush": true, 00:09:08.899 "reset": true, 00:09:08.899 "nvme_admin": false, 00:09:08.899 "nvme_io": false, 00:09:08.899 "nvme_io_md": false, 00:09:08.899 
"write_zeroes": true, 00:09:08.899 "zcopy": false, 00:09:08.899 "get_zone_info": false, 00:09:08.899 "zone_management": false, 00:09:08.899 "zone_append": false, 00:09:08.899 "compare": false, 00:09:08.899 "compare_and_write": false, 00:09:08.899 "abort": false, 00:09:08.899 "seek_hole": false, 00:09:08.899 "seek_data": false, 00:09:08.899 "copy": false, 00:09:08.899 "nvme_iov_md": false 00:09:08.899 }, 00:09:08.899 "memory_domains": [ 00:09:08.899 { 00:09:08.899 "dma_device_id": "system", 00:09:08.899 "dma_device_type": 1 00:09:08.899 }, 00:09:08.899 { 00:09:08.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.899 "dma_device_type": 2 00:09:08.899 }, 00:09:08.899 { 00:09:08.899 "dma_device_id": "system", 00:09:08.899 "dma_device_type": 1 00:09:08.899 }, 00:09:08.899 { 00:09:08.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.899 "dma_device_type": 2 00:09:08.899 }, 00:09:08.899 { 00:09:08.899 "dma_device_id": "system", 00:09:08.899 "dma_device_type": 1 00:09:08.899 }, 00:09:08.899 { 00:09:08.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.899 "dma_device_type": 2 00:09:08.899 } 00:09:08.899 ], 00:09:08.899 "driver_specific": { 00:09:08.899 "raid": { 00:09:08.899 "uuid": "1d37aa22-b397-400b-b23a-d9b3b31fb880", 00:09:08.899 "strip_size_kb": 64, 00:09:08.899 "state": "online", 00:09:08.899 "raid_level": "raid0", 00:09:08.899 "superblock": true, 00:09:08.899 "num_base_bdevs": 3, 00:09:08.899 "num_base_bdevs_discovered": 3, 00:09:08.899 "num_base_bdevs_operational": 3, 00:09:08.899 "base_bdevs_list": [ 00:09:08.899 { 00:09:08.900 "name": "BaseBdev1", 00:09:08.900 "uuid": "56ca30bb-2521-4ec1-9f20-8d89d82bf6f0", 00:09:08.900 "is_configured": true, 00:09:08.900 "data_offset": 2048, 00:09:08.900 "data_size": 63488 00:09:08.900 }, 00:09:08.900 { 00:09:08.900 "name": "BaseBdev2", 00:09:08.900 "uuid": "3f829084-9479-4404-b454-01dd6de0c4cb", 00:09:08.900 "is_configured": true, 00:09:08.900 "data_offset": 2048, 00:09:08.900 "data_size": 63488 00:09:08.900 }, 
00:09:08.900 { 00:09:08.900 "name": "BaseBdev3", 00:09:08.900 "uuid": "aff7d6d9-0825-4c22-91f3-26f758002102", 00:09:08.900 "is_configured": true, 00:09:08.900 "data_offset": 2048, 00:09:08.900 "data_size": 63488 00:09:08.900 } 00:09:08.900 ] 00:09:08.900 } 00:09:08.900 } 00:09:08.900 }' 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:08.900 BaseBdev2 00:09:08.900 BaseBdev3' 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.900 
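The `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'` filter above, which produced `base_bdev_names='BaseBdev1 BaseBdev2 BaseBdev3'`, can be approximated in Python against a sample trimmed from the JSON the test just dumped (only the fields the filter touches are kept):

```python
import json

# Trimmed sample of the bdev_get_bdevs output dumped above; only the
# fields the jq filter reads are kept.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
names = [b["name"]
         for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
         if b["is_configured"]]
print(names)  # ['BaseBdev1', 'BaseBdev2', 'BaseBdev3']
```

The test then loops over these names and compares each base bdev's `[.block_size, .md_size, .md_interleave, .dif_type]` geometry string against the raid volume's, which is why both sides reduce to `'512 '` here (block size 512, no metadata fields set).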
15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.900 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.160 [2024-11-25 15:36:07.649615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:09.160 [2024-11-25 15:36:07.649705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.160 [2024-11-25 15:36:07.649770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.160 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.160 "name": "Existed_Raid", 00:09:09.160 "uuid": "1d37aa22-b397-400b-b23a-d9b3b31fb880", 00:09:09.160 "strip_size_kb": 64, 00:09:09.160 "state": "offline", 00:09:09.160 "raid_level": "raid0", 00:09:09.160 "superblock": true, 00:09:09.160 "num_base_bdevs": 3, 00:09:09.160 "num_base_bdevs_discovered": 2, 00:09:09.160 "num_base_bdevs_operational": 2, 00:09:09.160 "base_bdevs_list": [ 00:09:09.160 { 00:09:09.160 "name": null, 00:09:09.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.160 "is_configured": false, 00:09:09.160 "data_offset": 0, 00:09:09.160 "data_size": 63488 00:09:09.160 }, 00:09:09.160 { 00:09:09.160 "name": "BaseBdev2", 00:09:09.160 "uuid": "3f829084-9479-4404-b454-01dd6de0c4cb", 00:09:09.160 "is_configured": true, 00:09:09.160 "data_offset": 2048, 00:09:09.160 "data_size": 63488 00:09:09.160 }, 00:09:09.160 { 00:09:09.160 "name": "BaseBdev3", 00:09:09.161 "uuid": "aff7d6d9-0825-4c22-91f3-26f758002102", 
00:09:09.161 "is_configured": true, 00:09:09.161 "data_offset": 2048, 00:09:09.161 "data_size": 63488 00:09:09.161 } 00:09:09.161 ] 00:09:09.161 }' 00:09:09.161 15:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.161 15:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.730 [2024-11-25 15:36:08.270650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:09.730 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.990 [2024-11-25 15:36:08.432658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:09.990 [2024-11-25 15:36:08.432830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.990 BaseBdev2 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.990 15:36:08 
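The `waitforbdev` helper invoked here boils down to polling `bdev_get_bdevs -b NAME -t 2000` until the bdev shows up or the timeout expires. A minimal sketch of that pattern, with `fake_get_bdev` as a hypothetical stand-in for the RPC call (the real helper lives in autotest_common.sh and also runs `bdev_wait_for_examine`):

```python
import time

def wait_for_bdev(get_bdev, name, timeout=2.0, interval=0.1):
    """Poll until get_bdev(name) reports the bdev, loosely mirroring
    waitforbdev's use of 'bdev_get_bdevs -b NAME -t 2000'.
    get_bdev is a stand-in for the RPC call; it returns None until
    the bdev exists."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        bdev = get_bdev(name)
        if bdev is not None:
            return bdev
        time.sleep(interval)
    raise TimeoutError(f"bdev {name} did not appear within {timeout}s")

# Fake RPC for illustration: the bdev "appears" on the third poll.
calls = {"n": 0}
def fake_get_bdev(name):
    calls["n"] += 1
    return {"name": name} if calls["n"] >= 3 else None

result = wait_for_bdev(fake_get_bdev, "BaseBdev2")
print(result["name"])  # BaseBdev2
```

This is a sketch under stated assumptions, not the shell helper's exact behavior; the real `-t 2000` flag makes the RPC itself block for up to 2000 ms rather than requiring a client-side loop.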
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.990 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.990 [ 00:09:09.990 { 00:09:09.990 "name": "BaseBdev2", 00:09:09.990 "aliases": [ 00:09:09.990 "2ab3153c-efaf-4d17-b778-197ad43a2e8f" 00:09:09.990 ], 00:09:09.990 "product_name": "Malloc disk", 00:09:09.990 "block_size": 512, 00:09:09.990 "num_blocks": 65536, 00:09:09.990 "uuid": "2ab3153c-efaf-4d17-b778-197ad43a2e8f", 00:09:09.990 "assigned_rate_limits": { 00:09:09.990 "rw_ios_per_sec": 0, 00:09:09.990 "rw_mbytes_per_sec": 0, 00:09:09.990 "r_mbytes_per_sec": 0, 00:09:09.990 "w_mbytes_per_sec": 0 00:09:09.990 }, 00:09:09.990 "claimed": false, 00:09:10.250 "zoned": false, 00:09:10.251 "supported_io_types": { 00:09:10.251 "read": true, 00:09:10.251 "write": true, 00:09:10.251 "unmap": true, 00:09:10.251 "flush": true, 00:09:10.251 "reset": true, 00:09:10.251 "nvme_admin": false, 00:09:10.251 "nvme_io": false, 00:09:10.251 "nvme_io_md": false, 00:09:10.251 "write_zeroes": true, 00:09:10.251 "zcopy": true, 00:09:10.251 "get_zone_info": false, 00:09:10.251 
"zone_management": false, 00:09:10.251 "zone_append": false, 00:09:10.251 "compare": false, 00:09:10.251 "compare_and_write": false, 00:09:10.251 "abort": true, 00:09:10.251 "seek_hole": false, 00:09:10.251 "seek_data": false, 00:09:10.251 "copy": true, 00:09:10.251 "nvme_iov_md": false 00:09:10.251 }, 00:09:10.251 "memory_domains": [ 00:09:10.251 { 00:09:10.251 "dma_device_id": "system", 00:09:10.251 "dma_device_type": 1 00:09:10.251 }, 00:09:10.251 { 00:09:10.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.251 "dma_device_type": 2 00:09:10.251 } 00:09:10.251 ], 00:09:10.251 "driver_specific": {} 00:09:10.251 } 00:09:10.251 ] 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.251 BaseBdev3 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.251 [ 00:09:10.251 { 00:09:10.251 "name": "BaseBdev3", 00:09:10.251 "aliases": [ 00:09:10.251 "7cfc881a-3903-4d21-927f-e72d31195024" 00:09:10.251 ], 00:09:10.251 "product_name": "Malloc disk", 00:09:10.251 "block_size": 512, 00:09:10.251 "num_blocks": 65536, 00:09:10.251 "uuid": "7cfc881a-3903-4d21-927f-e72d31195024", 00:09:10.251 "assigned_rate_limits": { 00:09:10.251 "rw_ios_per_sec": 0, 00:09:10.251 "rw_mbytes_per_sec": 0, 00:09:10.251 "r_mbytes_per_sec": 0, 00:09:10.251 "w_mbytes_per_sec": 0 00:09:10.251 }, 00:09:10.251 "claimed": false, 00:09:10.251 "zoned": false, 00:09:10.251 "supported_io_types": { 00:09:10.251 "read": true, 00:09:10.251 "write": true, 00:09:10.251 "unmap": true, 00:09:10.251 "flush": true, 00:09:10.251 "reset": true, 00:09:10.251 "nvme_admin": false, 00:09:10.251 "nvme_io": false, 00:09:10.251 "nvme_io_md": false, 00:09:10.251 "write_zeroes": true, 00:09:10.251 
"zcopy": true, 00:09:10.251 "get_zone_info": false, 00:09:10.251 "zone_management": false, 00:09:10.251 "zone_append": false, 00:09:10.251 "compare": false, 00:09:10.251 "compare_and_write": false, 00:09:10.251 "abort": true, 00:09:10.251 "seek_hole": false, 00:09:10.251 "seek_data": false, 00:09:10.251 "copy": true, 00:09:10.251 "nvme_iov_md": false 00:09:10.251 }, 00:09:10.251 "memory_domains": [ 00:09:10.251 { 00:09:10.251 "dma_device_id": "system", 00:09:10.251 "dma_device_type": 1 00:09:10.251 }, 00:09:10.251 { 00:09:10.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.251 "dma_device_type": 2 00:09:10.251 } 00:09:10.251 ], 00:09:10.251 "driver_specific": {} 00:09:10.251 } 00:09:10.251 ] 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.251 [2024-11-25 15:36:08.770086] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:10.251 [2024-11-25 15:36:08.770219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:10.251 [2024-11-25 15:36:08.770250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.251 [2024-11-25 15:36:08.772377] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.251 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.252 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.252 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.252 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.252 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.252 15:36:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.252 "name": "Existed_Raid", 00:09:10.252 "uuid": "26fa5079-8b3e-4d25-8c21-7fc48e87a51e", 00:09:10.252 "strip_size_kb": 64, 00:09:10.252 "state": "configuring", 00:09:10.252 "raid_level": "raid0", 00:09:10.252 "superblock": true, 00:09:10.252 "num_base_bdevs": 3, 00:09:10.252 "num_base_bdevs_discovered": 2, 00:09:10.252 "num_base_bdevs_operational": 3, 00:09:10.252 "base_bdevs_list": [ 00:09:10.252 { 00:09:10.252 "name": "BaseBdev1", 00:09:10.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.252 "is_configured": false, 00:09:10.252 "data_offset": 0, 00:09:10.252 "data_size": 0 00:09:10.252 }, 00:09:10.252 { 00:09:10.252 "name": "BaseBdev2", 00:09:10.252 "uuid": "2ab3153c-efaf-4d17-b778-197ad43a2e8f", 00:09:10.252 "is_configured": true, 00:09:10.252 "data_offset": 2048, 00:09:10.252 "data_size": 63488 00:09:10.252 }, 00:09:10.252 { 00:09:10.252 "name": "BaseBdev3", 00:09:10.252 "uuid": "7cfc881a-3903-4d21-927f-e72d31195024", 00:09:10.252 "is_configured": true, 00:09:10.252 "data_offset": 2048, 00:09:10.252 "data_size": 63488 00:09:10.252 } 00:09:10.252 ] 00:09:10.252 }' 00:09:10.252 15:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.252 15:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.820 [2024-11-25 15:36:09.253248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.820 15:36:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.820 "name": "Existed_Raid", 00:09:10.820 "uuid": "26fa5079-8b3e-4d25-8c21-7fc48e87a51e", 00:09:10.820 "strip_size_kb": 64, 
00:09:10.820 "state": "configuring", 00:09:10.820 "raid_level": "raid0", 00:09:10.820 "superblock": true, 00:09:10.820 "num_base_bdevs": 3, 00:09:10.820 "num_base_bdevs_discovered": 1, 00:09:10.820 "num_base_bdevs_operational": 3, 00:09:10.820 "base_bdevs_list": [ 00:09:10.820 { 00:09:10.820 "name": "BaseBdev1", 00:09:10.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.820 "is_configured": false, 00:09:10.820 "data_offset": 0, 00:09:10.820 "data_size": 0 00:09:10.820 }, 00:09:10.820 { 00:09:10.820 "name": null, 00:09:10.820 "uuid": "2ab3153c-efaf-4d17-b778-197ad43a2e8f", 00:09:10.820 "is_configured": false, 00:09:10.820 "data_offset": 0, 00:09:10.820 "data_size": 63488 00:09:10.820 }, 00:09:10.820 { 00:09:10.820 "name": "BaseBdev3", 00:09:10.820 "uuid": "7cfc881a-3903-4d21-927f-e72d31195024", 00:09:10.820 "is_configured": true, 00:09:10.820 "data_offset": 2048, 00:09:10.820 "data_size": 63488 00:09:10.820 } 00:09:10.820 ] 00:09:10.820 }' 00:09:10.820 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.821 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.080 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.080 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.080 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:11.080 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.080 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.080 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:11.080 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:11.080 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.080 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.080 [2024-11-25 15:36:09.721573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.080 BaseBdev1 00:09:11.080 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.080 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:11.081 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:11.081 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.081 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:11.081 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.081 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.081 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.081 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.081 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.081 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.081 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:11.081 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.081 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.081 
[ 00:09:11.081 { 00:09:11.081 "name": "BaseBdev1", 00:09:11.081 "aliases": [ 00:09:11.081 "c4633af7-c89c-40cb-8541-e7ea7ba7cce1" 00:09:11.081 ], 00:09:11.081 "product_name": "Malloc disk", 00:09:11.081 "block_size": 512, 00:09:11.081 "num_blocks": 65536, 00:09:11.081 "uuid": "c4633af7-c89c-40cb-8541-e7ea7ba7cce1", 00:09:11.081 "assigned_rate_limits": { 00:09:11.081 "rw_ios_per_sec": 0, 00:09:11.081 "rw_mbytes_per_sec": 0, 00:09:11.081 "r_mbytes_per_sec": 0, 00:09:11.081 "w_mbytes_per_sec": 0 00:09:11.081 }, 00:09:11.081 "claimed": true, 00:09:11.081 "claim_type": "exclusive_write", 00:09:11.081 "zoned": false, 00:09:11.081 "supported_io_types": { 00:09:11.081 "read": true, 00:09:11.081 "write": true, 00:09:11.081 "unmap": true, 00:09:11.081 "flush": true, 00:09:11.081 "reset": true, 00:09:11.081 "nvme_admin": false, 00:09:11.081 "nvme_io": false, 00:09:11.081 "nvme_io_md": false, 00:09:11.081 "write_zeroes": true, 00:09:11.081 "zcopy": true, 00:09:11.081 "get_zone_info": false, 00:09:11.081 "zone_management": false, 00:09:11.081 "zone_append": false, 00:09:11.081 "compare": false, 00:09:11.081 "compare_and_write": false, 00:09:11.081 "abort": true, 00:09:11.081 "seek_hole": false, 00:09:11.081 "seek_data": false, 00:09:11.081 "copy": true, 00:09:11.081 "nvme_iov_md": false 00:09:11.081 }, 00:09:11.081 "memory_domains": [ 00:09:11.081 { 00:09:11.081 "dma_device_id": "system", 00:09:11.081 "dma_device_type": 1 00:09:11.081 }, 00:09:11.081 { 00:09:11.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.081 "dma_device_type": 2 00:09:11.081 } 00:09:11.081 ], 00:09:11.081 "driver_specific": {} 00:09:11.081 } 00:09:11.081 ] 00:09:11.081 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.081 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.341 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.341 "name": "Existed_Raid", 00:09:11.341 "uuid": "26fa5079-8b3e-4d25-8c21-7fc48e87a51e", 00:09:11.342 "strip_size_kb": 64, 00:09:11.342 "state": "configuring", 00:09:11.342 "raid_level": "raid0", 00:09:11.342 "superblock": true, 
00:09:11.342 "num_base_bdevs": 3, 00:09:11.342 "num_base_bdevs_discovered": 2, 00:09:11.342 "num_base_bdevs_operational": 3, 00:09:11.342 "base_bdevs_list": [ 00:09:11.342 { 00:09:11.342 "name": "BaseBdev1", 00:09:11.342 "uuid": "c4633af7-c89c-40cb-8541-e7ea7ba7cce1", 00:09:11.342 "is_configured": true, 00:09:11.342 "data_offset": 2048, 00:09:11.342 "data_size": 63488 00:09:11.342 }, 00:09:11.342 { 00:09:11.342 "name": null, 00:09:11.342 "uuid": "2ab3153c-efaf-4d17-b778-197ad43a2e8f", 00:09:11.342 "is_configured": false, 00:09:11.342 "data_offset": 0, 00:09:11.342 "data_size": 63488 00:09:11.342 }, 00:09:11.342 { 00:09:11.342 "name": "BaseBdev3", 00:09:11.342 "uuid": "7cfc881a-3903-4d21-927f-e72d31195024", 00:09:11.342 "is_configured": true, 00:09:11.342 "data_offset": 2048, 00:09:11.342 "data_size": 63488 00:09:11.342 } 00:09:11.342 ] 00:09:11.342 }' 00:09:11.342 15:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.342 15:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.601 [2024-11-25 15:36:10.144904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.601 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.602 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:11.602 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.602 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.602 "name": "Existed_Raid", 00:09:11.602 "uuid": "26fa5079-8b3e-4d25-8c21-7fc48e87a51e", 00:09:11.602 "strip_size_kb": 64, 00:09:11.602 "state": "configuring", 00:09:11.602 "raid_level": "raid0", 00:09:11.602 "superblock": true, 00:09:11.602 "num_base_bdevs": 3, 00:09:11.602 "num_base_bdevs_discovered": 1, 00:09:11.602 "num_base_bdevs_operational": 3, 00:09:11.602 "base_bdevs_list": [ 00:09:11.602 { 00:09:11.602 "name": "BaseBdev1", 00:09:11.602 "uuid": "c4633af7-c89c-40cb-8541-e7ea7ba7cce1", 00:09:11.602 "is_configured": true, 00:09:11.602 "data_offset": 2048, 00:09:11.602 "data_size": 63488 00:09:11.602 }, 00:09:11.602 { 00:09:11.602 "name": null, 00:09:11.602 "uuid": "2ab3153c-efaf-4d17-b778-197ad43a2e8f", 00:09:11.602 "is_configured": false, 00:09:11.602 "data_offset": 0, 00:09:11.602 "data_size": 63488 00:09:11.602 }, 00:09:11.602 { 00:09:11.602 "name": null, 00:09:11.602 "uuid": "7cfc881a-3903-4d21-927f-e72d31195024", 00:09:11.602 "is_configured": false, 00:09:11.602 "data_offset": 0, 00:09:11.602 "data_size": 63488 00:09:11.602 } 00:09:11.602 ] 00:09:11.602 }' 00:09:11.602 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.602 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.170 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:12.170 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.170 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.170 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:09:12.170 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.170 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:12.170 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:12.170 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.170 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.170 [2024-11-25 15:36:10.620228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.171 "name": "Existed_Raid", 00:09:12.171 "uuid": "26fa5079-8b3e-4d25-8c21-7fc48e87a51e", 00:09:12.171 "strip_size_kb": 64, 00:09:12.171 "state": "configuring", 00:09:12.171 "raid_level": "raid0", 00:09:12.171 "superblock": true, 00:09:12.171 "num_base_bdevs": 3, 00:09:12.171 "num_base_bdevs_discovered": 2, 00:09:12.171 "num_base_bdevs_operational": 3, 00:09:12.171 "base_bdevs_list": [ 00:09:12.171 { 00:09:12.171 "name": "BaseBdev1", 00:09:12.171 "uuid": "c4633af7-c89c-40cb-8541-e7ea7ba7cce1", 00:09:12.171 "is_configured": true, 00:09:12.171 "data_offset": 2048, 00:09:12.171 "data_size": 63488 00:09:12.171 }, 00:09:12.171 { 00:09:12.171 "name": null, 00:09:12.171 "uuid": "2ab3153c-efaf-4d17-b778-197ad43a2e8f", 00:09:12.171 "is_configured": false, 00:09:12.171 "data_offset": 0, 00:09:12.171 "data_size": 63488 00:09:12.171 }, 00:09:12.171 { 00:09:12.171 "name": "BaseBdev3", 00:09:12.171 "uuid": "7cfc881a-3903-4d21-927f-e72d31195024", 00:09:12.171 "is_configured": true, 00:09:12.171 "data_offset": 2048, 00:09:12.171 "data_size": 63488 00:09:12.171 } 00:09:12.171 ] 00:09:12.171 }' 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.171 15:36:10 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:12.431 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.431 15:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:12.431 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.431 15:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.431 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.431 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:12.431 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:12.431 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.431 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.431 [2024-11-25 15:36:11.051449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:12.690 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.690 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.690 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.690 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.690 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.691 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.691 15:36:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.691 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.691 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.691 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.691 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.691 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.691 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.691 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.691 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.691 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.691 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.691 "name": "Existed_Raid", 00:09:12.691 "uuid": "26fa5079-8b3e-4d25-8c21-7fc48e87a51e", 00:09:12.691 "strip_size_kb": 64, 00:09:12.691 "state": "configuring", 00:09:12.691 "raid_level": "raid0", 00:09:12.691 "superblock": true, 00:09:12.691 "num_base_bdevs": 3, 00:09:12.691 "num_base_bdevs_discovered": 1, 00:09:12.691 "num_base_bdevs_operational": 3, 00:09:12.691 "base_bdevs_list": [ 00:09:12.691 { 00:09:12.691 "name": null, 00:09:12.691 "uuid": "c4633af7-c89c-40cb-8541-e7ea7ba7cce1", 00:09:12.691 "is_configured": false, 00:09:12.691 "data_offset": 0, 00:09:12.691 "data_size": 63488 00:09:12.691 }, 00:09:12.691 { 00:09:12.691 "name": null, 00:09:12.691 "uuid": "2ab3153c-efaf-4d17-b778-197ad43a2e8f", 00:09:12.691 "is_configured": false, 00:09:12.691 "data_offset": 0, 00:09:12.691 
"data_size": 63488 00:09:12.691 }, 00:09:12.691 { 00:09:12.691 "name": "BaseBdev3", 00:09:12.691 "uuid": "7cfc881a-3903-4d21-927f-e72d31195024", 00:09:12.691 "is_configured": true, 00:09:12.691 "data_offset": 2048, 00:09:12.691 "data_size": 63488 00:09:12.691 } 00:09:12.691 ] 00:09:12.691 }' 00:09:12.691 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.691 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.260 [2024-11-25 15:36:11.695974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:13.260 15:36:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.260 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.261 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.261 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.261 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.261 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.261 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.261 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.261 "name": "Existed_Raid", 00:09:13.261 "uuid": "26fa5079-8b3e-4d25-8c21-7fc48e87a51e", 00:09:13.261 "strip_size_kb": 64, 00:09:13.261 "state": "configuring", 00:09:13.261 "raid_level": "raid0", 00:09:13.261 "superblock": true, 00:09:13.261 "num_base_bdevs": 3, 00:09:13.261 
"num_base_bdevs_discovered": 2, 00:09:13.261 "num_base_bdevs_operational": 3, 00:09:13.261 "base_bdevs_list": [ 00:09:13.261 { 00:09:13.261 "name": null, 00:09:13.261 "uuid": "c4633af7-c89c-40cb-8541-e7ea7ba7cce1", 00:09:13.261 "is_configured": false, 00:09:13.261 "data_offset": 0, 00:09:13.261 "data_size": 63488 00:09:13.261 }, 00:09:13.261 { 00:09:13.261 "name": "BaseBdev2", 00:09:13.261 "uuid": "2ab3153c-efaf-4d17-b778-197ad43a2e8f", 00:09:13.261 "is_configured": true, 00:09:13.261 "data_offset": 2048, 00:09:13.261 "data_size": 63488 00:09:13.261 }, 00:09:13.261 { 00:09:13.261 "name": "BaseBdev3", 00:09:13.261 "uuid": "7cfc881a-3903-4d21-927f-e72d31195024", 00:09:13.261 "is_configured": true, 00:09:13.261 "data_offset": 2048, 00:09:13.261 "data_size": 63488 00:09:13.261 } 00:09:13.261 ] 00:09:13.261 }' 00:09:13.261 15:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.261 15:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.520 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.520 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.520 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.520 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.520 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.520 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:13.521 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.521 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:13.521 15:36:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.521 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.785 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.785 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c4633af7-c89c-40cb-8541-e7ea7ba7cce1 00:09:13.785 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.785 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.785 [2024-11-25 15:36:12.284838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:13.785 [2024-11-25 15:36:12.285246] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:13.786 [2024-11-25 15:36:12.285289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:13.786 [2024-11-25 15:36:12.285654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:13.786 [2024-11-25 15:36:12.285847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:13.786 [2024-11-25 15:36:12.285888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:13.786 NewBaseBdev 00:09:13.786 [2024-11-25 15:36:12.286077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.786 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.786 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:13.786 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:13.786 
15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.786 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:13.786 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.786 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.786 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.787 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.787 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.787 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.787 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:13.787 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.787 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.787 [ 00:09:13.787 { 00:09:13.787 "name": "NewBaseBdev", 00:09:13.787 "aliases": [ 00:09:13.787 "c4633af7-c89c-40cb-8541-e7ea7ba7cce1" 00:09:13.787 ], 00:09:13.787 "product_name": "Malloc disk", 00:09:13.787 "block_size": 512, 00:09:13.787 "num_blocks": 65536, 00:09:13.787 "uuid": "c4633af7-c89c-40cb-8541-e7ea7ba7cce1", 00:09:13.787 "assigned_rate_limits": { 00:09:13.787 "rw_ios_per_sec": 0, 00:09:13.787 "rw_mbytes_per_sec": 0, 00:09:13.787 "r_mbytes_per_sec": 0, 00:09:13.787 "w_mbytes_per_sec": 0 00:09:13.787 }, 00:09:13.788 "claimed": true, 00:09:13.788 "claim_type": "exclusive_write", 00:09:13.788 "zoned": false, 00:09:13.788 "supported_io_types": { 00:09:13.788 "read": true, 00:09:13.788 "write": true, 00:09:13.788 
"unmap": true, 00:09:13.788 "flush": true, 00:09:13.788 "reset": true, 00:09:13.788 "nvme_admin": false, 00:09:13.788 "nvme_io": false, 00:09:13.788 "nvme_io_md": false, 00:09:13.788 "write_zeroes": true, 00:09:13.788 "zcopy": true, 00:09:13.788 "get_zone_info": false, 00:09:13.788 "zone_management": false, 00:09:13.788 "zone_append": false, 00:09:13.788 "compare": false, 00:09:13.788 "compare_and_write": false, 00:09:13.788 "abort": true, 00:09:13.788 "seek_hole": false, 00:09:13.788 "seek_data": false, 00:09:13.788 "copy": true, 00:09:13.788 "nvme_iov_md": false 00:09:13.788 }, 00:09:13.788 "memory_domains": [ 00:09:13.788 { 00:09:13.788 "dma_device_id": "system", 00:09:13.788 "dma_device_type": 1 00:09:13.788 }, 00:09:13.788 { 00:09:13.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.788 "dma_device_type": 2 00:09:13.788 } 00:09:13.788 ], 00:09:13.789 "driver_specific": {} 00:09:13.789 } 00:09:13.789 ] 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.789 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.790 "name": "Existed_Raid", 00:09:13.790 "uuid": "26fa5079-8b3e-4d25-8c21-7fc48e87a51e", 00:09:13.790 "strip_size_kb": 64, 00:09:13.790 "state": "online", 00:09:13.790 "raid_level": "raid0", 00:09:13.790 "superblock": true, 00:09:13.790 "num_base_bdevs": 3, 00:09:13.790 "num_base_bdevs_discovered": 3, 00:09:13.790 "num_base_bdevs_operational": 3, 00:09:13.790 "base_bdevs_list": [ 00:09:13.790 { 00:09:13.790 "name": "NewBaseBdev", 00:09:13.790 "uuid": "c4633af7-c89c-40cb-8541-e7ea7ba7cce1", 00:09:13.790 "is_configured": true, 00:09:13.790 "data_offset": 2048, 00:09:13.790 "data_size": 63488 00:09:13.790 }, 00:09:13.790 { 00:09:13.790 "name": "BaseBdev2", 00:09:13.790 "uuid": "2ab3153c-efaf-4d17-b778-197ad43a2e8f", 00:09:13.790 "is_configured": true, 00:09:13.790 "data_offset": 2048, 00:09:13.790 "data_size": 63488 00:09:13.790 }, 00:09:13.790 { 00:09:13.790 "name": "BaseBdev3", 00:09:13.790 "uuid": "7cfc881a-3903-4d21-927f-e72d31195024", 00:09:13.790 
"is_configured": true, 00:09:13.790 "data_offset": 2048, 00:09:13.791 "data_size": 63488 00:09:13.791 } 00:09:13.791 ] 00:09:13.791 }' 00:09:13.791 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.791 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.365 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:14.365 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:14.365 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.365 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.365 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.365 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.365 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:14.365 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.365 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.365 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.365 [2024-11-25 15:36:12.792386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.365 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.365 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.365 "name": "Existed_Raid", 00:09:14.365 "aliases": [ 00:09:14.365 "26fa5079-8b3e-4d25-8c21-7fc48e87a51e" 00:09:14.365 ], 00:09:14.365 "product_name": "Raid 
Volume", 00:09:14.365 "block_size": 512, 00:09:14.365 "num_blocks": 190464, 00:09:14.365 "uuid": "26fa5079-8b3e-4d25-8c21-7fc48e87a51e", 00:09:14.365 "assigned_rate_limits": { 00:09:14.365 "rw_ios_per_sec": 0, 00:09:14.365 "rw_mbytes_per_sec": 0, 00:09:14.365 "r_mbytes_per_sec": 0, 00:09:14.365 "w_mbytes_per_sec": 0 00:09:14.365 }, 00:09:14.365 "claimed": false, 00:09:14.365 "zoned": false, 00:09:14.365 "supported_io_types": { 00:09:14.365 "read": true, 00:09:14.365 "write": true, 00:09:14.365 "unmap": true, 00:09:14.365 "flush": true, 00:09:14.365 "reset": true, 00:09:14.365 "nvme_admin": false, 00:09:14.365 "nvme_io": false, 00:09:14.365 "nvme_io_md": false, 00:09:14.365 "write_zeroes": true, 00:09:14.365 "zcopy": false, 00:09:14.365 "get_zone_info": false, 00:09:14.365 "zone_management": false, 00:09:14.365 "zone_append": false, 00:09:14.365 "compare": false, 00:09:14.365 "compare_and_write": false, 00:09:14.365 "abort": false, 00:09:14.365 "seek_hole": false, 00:09:14.365 "seek_data": false, 00:09:14.365 "copy": false, 00:09:14.365 "nvme_iov_md": false 00:09:14.365 }, 00:09:14.365 "memory_domains": [ 00:09:14.365 { 00:09:14.365 "dma_device_id": "system", 00:09:14.365 "dma_device_type": 1 00:09:14.365 }, 00:09:14.365 { 00:09:14.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.365 "dma_device_type": 2 00:09:14.365 }, 00:09:14.365 { 00:09:14.365 "dma_device_id": "system", 00:09:14.365 "dma_device_type": 1 00:09:14.365 }, 00:09:14.365 { 00:09:14.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.365 "dma_device_type": 2 00:09:14.365 }, 00:09:14.365 { 00:09:14.365 "dma_device_id": "system", 00:09:14.365 "dma_device_type": 1 00:09:14.365 }, 00:09:14.365 { 00:09:14.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.365 "dma_device_type": 2 00:09:14.365 } 00:09:14.365 ], 00:09:14.365 "driver_specific": { 00:09:14.365 "raid": { 00:09:14.365 "uuid": "26fa5079-8b3e-4d25-8c21-7fc48e87a51e", 00:09:14.365 "strip_size_kb": 64, 00:09:14.365 "state": "online", 
00:09:14.365 "raid_level": "raid0", 00:09:14.365 "superblock": true, 00:09:14.365 "num_base_bdevs": 3, 00:09:14.365 "num_base_bdevs_discovered": 3, 00:09:14.365 "num_base_bdevs_operational": 3, 00:09:14.365 "base_bdevs_list": [ 00:09:14.365 { 00:09:14.365 "name": "NewBaseBdev", 00:09:14.365 "uuid": "c4633af7-c89c-40cb-8541-e7ea7ba7cce1", 00:09:14.365 "is_configured": true, 00:09:14.365 "data_offset": 2048, 00:09:14.365 "data_size": 63488 00:09:14.365 }, 00:09:14.365 { 00:09:14.365 "name": "BaseBdev2", 00:09:14.366 "uuid": "2ab3153c-efaf-4d17-b778-197ad43a2e8f", 00:09:14.366 "is_configured": true, 00:09:14.366 "data_offset": 2048, 00:09:14.366 "data_size": 63488 00:09:14.366 }, 00:09:14.366 { 00:09:14.366 "name": "BaseBdev3", 00:09:14.366 "uuid": "7cfc881a-3903-4d21-927f-e72d31195024", 00:09:14.366 "is_configured": true, 00:09:14.366 "data_offset": 2048, 00:09:14.366 "data_size": 63488 00:09:14.366 } 00:09:14.366 ] 00:09:14.366 } 00:09:14.366 } 00:09:14.366 }' 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:14.366 BaseBdev2 00:09:14.366 BaseBdev3' 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.366 15:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.366 15:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.366 15:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.366 15:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.366 15:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:14.366 15:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.366 15:36:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.366 15:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.366 15:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.366 15:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.625 [2024-11-25 15:36:13.051669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.625 [2024-11-25 15:36:13.051731] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:14.625 [2024-11-25 15:36:13.051848] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.625 [2024-11-25 15:36:13.051913] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:14.625 [2024-11-25 15:36:13.051929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64224 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64224 ']' 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
64224 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64224 00:09:14.625 killing process with pid 64224 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64224' 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64224 00:09:14.625 [2024-11-25 15:36:13.085849] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:14.625 15:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64224 00:09:14.895 [2024-11-25 15:36:13.418336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.284 15:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:16.284 00:09:16.284 real 0m10.653s 00:09:16.284 user 0m16.698s 00:09:16.284 sys 0m1.856s 00:09:16.284 15:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.284 15:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.284 ************************************ 00:09:16.284 END TEST raid_state_function_test_sb 00:09:16.284 ************************************ 00:09:16.284 15:36:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:16.284 15:36:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:16.284 
15:36:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.284 15:36:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.284 ************************************ 00:09:16.284 START TEST raid_superblock_test 00:09:16.284 ************************************ 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64844 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64844 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64844 ']' 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.284 15:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.284 [2024-11-25 15:36:14.785962] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:09:16.284 [2024-11-25 15:36:14.786179] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64844 ] 00:09:16.284 [2024-11-25 15:36:14.956004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.544 [2024-11-25 15:36:15.092244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.804 [2024-11-25 15:36:15.330809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.804 [2024-11-25 15:36:15.330856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:17.064 
15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.064 malloc1 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.064 [2024-11-25 15:36:15.652970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:17.064 [2024-11-25 15:36:15.653146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.064 [2024-11-25 15:36:15.653193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:17.064 [2024-11-25 15:36:15.653224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.064 [2024-11-25 15:36:15.655752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.064 [2024-11-25 15:36:15.655837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:17.064 pt1 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.064 malloc2 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.064 [2024-11-25 15:36:15.716812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:17.064 [2024-11-25 15:36:15.716963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.064 [2024-11-25 15:36:15.717017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:17.064 [2024-11-25 15:36:15.717057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.064 [2024-11-25 15:36:15.719527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.064 [2024-11-25 15:36:15.719601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:17.064 
pt2 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.064 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.324 malloc3 00:09:17.324 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.324 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.325 [2024-11-25 15:36:15.785825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:17.325 [2024-11-25 15:36:15.785958] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.325 [2024-11-25 15:36:15.785999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:17.325 [2024-11-25 15:36:15.786055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.325 [2024-11-25 15:36:15.788532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.325 [2024-11-25 15:36:15.788603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:17.325 pt3 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.325 [2024-11-25 15:36:15.797865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:17.325 [2024-11-25 15:36:15.799989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:17.325 [2024-11-25 15:36:15.800129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:17.325 [2024-11-25 15:36:15.800325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:17.325 [2024-11-25 15:36:15.800377] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:17.325 [2024-11-25 15:36:15.800651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:17.325 [2024-11-25 15:36:15.800881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:17.325 [2024-11-25 15:36:15.800924] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:17.325 [2024-11-25 15:36:15.801128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.325 15:36:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.325 "name": "raid_bdev1", 00:09:17.325 "uuid": "a4845243-2568-4877-ad0a-97142aa57cce", 00:09:17.325 "strip_size_kb": 64, 00:09:17.325 "state": "online", 00:09:17.325 "raid_level": "raid0", 00:09:17.325 "superblock": true, 00:09:17.325 "num_base_bdevs": 3, 00:09:17.325 "num_base_bdevs_discovered": 3, 00:09:17.325 "num_base_bdevs_operational": 3, 00:09:17.325 "base_bdevs_list": [ 00:09:17.325 { 00:09:17.325 "name": "pt1", 00:09:17.325 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.325 "is_configured": true, 00:09:17.325 "data_offset": 2048, 00:09:17.325 "data_size": 63488 00:09:17.325 }, 00:09:17.325 { 00:09:17.325 "name": "pt2", 00:09:17.325 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.325 "is_configured": true, 00:09:17.325 "data_offset": 2048, 00:09:17.325 "data_size": 63488 00:09:17.325 }, 00:09:17.325 { 00:09:17.325 "name": "pt3", 00:09:17.325 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.325 "is_configured": true, 00:09:17.325 "data_offset": 2048, 00:09:17.325 "data_size": 63488 00:09:17.325 } 00:09:17.325 ] 00:09:17.325 }' 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.325 15:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.585 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:17.585 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:17.585 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:17.585 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:17.585 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:17.585 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:17.585 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.585 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:17.585 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.585 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.585 [2024-11-25 15:36:16.253463] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:17.845 "name": "raid_bdev1", 00:09:17.845 "aliases": [ 00:09:17.845 "a4845243-2568-4877-ad0a-97142aa57cce" 00:09:17.845 ], 00:09:17.845 "product_name": "Raid Volume", 00:09:17.845 "block_size": 512, 00:09:17.845 "num_blocks": 190464, 00:09:17.845 "uuid": "a4845243-2568-4877-ad0a-97142aa57cce", 00:09:17.845 "assigned_rate_limits": { 00:09:17.845 "rw_ios_per_sec": 0, 00:09:17.845 "rw_mbytes_per_sec": 0, 00:09:17.845 "r_mbytes_per_sec": 0, 00:09:17.845 "w_mbytes_per_sec": 0 00:09:17.845 }, 00:09:17.845 "claimed": false, 00:09:17.845 "zoned": false, 00:09:17.845 "supported_io_types": { 00:09:17.845 "read": true, 00:09:17.845 "write": true, 00:09:17.845 "unmap": true, 00:09:17.845 "flush": true, 00:09:17.845 "reset": true, 00:09:17.845 "nvme_admin": false, 00:09:17.845 "nvme_io": false, 00:09:17.845 "nvme_io_md": false, 00:09:17.845 "write_zeroes": true, 00:09:17.845 "zcopy": false, 00:09:17.845 "get_zone_info": false, 00:09:17.845 "zone_management": false, 00:09:17.845 "zone_append": false, 00:09:17.845 "compare": 
false, 00:09:17.845 "compare_and_write": false, 00:09:17.845 "abort": false, 00:09:17.845 "seek_hole": false, 00:09:17.845 "seek_data": false, 00:09:17.845 "copy": false, 00:09:17.845 "nvme_iov_md": false 00:09:17.845 }, 00:09:17.845 "memory_domains": [ 00:09:17.845 { 00:09:17.845 "dma_device_id": "system", 00:09:17.845 "dma_device_type": 1 00:09:17.845 }, 00:09:17.845 { 00:09:17.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.845 "dma_device_type": 2 00:09:17.845 }, 00:09:17.845 { 00:09:17.845 "dma_device_id": "system", 00:09:17.845 "dma_device_type": 1 00:09:17.845 }, 00:09:17.845 { 00:09:17.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.845 "dma_device_type": 2 00:09:17.845 }, 00:09:17.845 { 00:09:17.845 "dma_device_id": "system", 00:09:17.845 "dma_device_type": 1 00:09:17.845 }, 00:09:17.845 { 00:09:17.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.845 "dma_device_type": 2 00:09:17.845 } 00:09:17.845 ], 00:09:17.845 "driver_specific": { 00:09:17.845 "raid": { 00:09:17.845 "uuid": "a4845243-2568-4877-ad0a-97142aa57cce", 00:09:17.845 "strip_size_kb": 64, 00:09:17.845 "state": "online", 00:09:17.845 "raid_level": "raid0", 00:09:17.845 "superblock": true, 00:09:17.845 "num_base_bdevs": 3, 00:09:17.845 "num_base_bdevs_discovered": 3, 00:09:17.845 "num_base_bdevs_operational": 3, 00:09:17.845 "base_bdevs_list": [ 00:09:17.845 { 00:09:17.845 "name": "pt1", 00:09:17.845 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.845 "is_configured": true, 00:09:17.845 "data_offset": 2048, 00:09:17.845 "data_size": 63488 00:09:17.845 }, 00:09:17.845 { 00:09:17.845 "name": "pt2", 00:09:17.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.845 "is_configured": true, 00:09:17.845 "data_offset": 2048, 00:09:17.845 "data_size": 63488 00:09:17.845 }, 00:09:17.845 { 00:09:17.845 "name": "pt3", 00:09:17.845 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.845 "is_configured": true, 00:09:17.845 "data_offset": 2048, 00:09:17.845 "data_size": 
63488 00:09:17.845 } 00:09:17.845 ] 00:09:17.845 } 00:09:17.845 } 00:09:17.845 }' 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:17.845 pt2 00:09:17.845 pt3' 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.845 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.846 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.846 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.846 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:17.846 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.846 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.846 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.846 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.846 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.846 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.846 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:17.846 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.846 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.846 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.846 [2024-11-25 15:36:16.508888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a4845243-2568-4877-ad0a-97142aa57cce 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a4845243-2568-4877-ad0a-97142aa57cce ']' 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.107 [2024-11-25 15:36:16.552528] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:18.107 [2024-11-25 15:36:16.552612] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.107 [2024-11-25 15:36:16.552723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.107 [2024-11-25 15:36:16.552817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:18.107 [2024-11-25 15:36:16.552860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:18.107 15:36:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.107 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.107 [2024-11-25 15:36:16.704306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:18.107 [2024-11-25 15:36:16.706490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:18.107 [2024-11-25 15:36:16.706624] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:18.107 [2024-11-25 15:36:16.706686] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:18.107 [2024-11-25 15:36:16.706736] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:18.107 [2024-11-25 15:36:16.706755] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:18.107 [2024-11-25 15:36:16.706773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:18.107 [2024-11-25 15:36:16.706784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:18.107 request: 00:09:18.107 { 00:09:18.107 "name": "raid_bdev1", 00:09:18.107 "raid_level": "raid0", 00:09:18.107 "base_bdevs": [ 00:09:18.107 "malloc1", 00:09:18.107 "malloc2", 00:09:18.107 "malloc3" 00:09:18.107 ], 00:09:18.107 "strip_size_kb": 64, 00:09:18.107 "superblock": false, 00:09:18.107 "method": "bdev_raid_create", 00:09:18.107 "req_id": 1 00:09:18.107 } 00:09:18.107 Got JSON-RPC error response 00:09:18.107 response: 00:09:18.107 { 00:09:18.107 "code": -17, 00:09:18.108 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:18.108 } 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.108 [2024-11-25 15:36:16.768137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:18.108 [2024-11-25 15:36:16.768227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.108 [2024-11-25 15:36:16.768261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:18.108 [2024-11-25 15:36:16.768288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.108 [2024-11-25 15:36:16.770812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.108 [2024-11-25 15:36:16.770889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:18.108 [2024-11-25 15:36:16.770991] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:18.108 [2024-11-25 15:36:16.771088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:18.108 pt1 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.108 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.368 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.368 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.368 "name": "raid_bdev1", 00:09:18.368 "uuid": "a4845243-2568-4877-ad0a-97142aa57cce", 00:09:18.368 
"strip_size_kb": 64, 00:09:18.368 "state": "configuring", 00:09:18.368 "raid_level": "raid0", 00:09:18.368 "superblock": true, 00:09:18.368 "num_base_bdevs": 3, 00:09:18.368 "num_base_bdevs_discovered": 1, 00:09:18.368 "num_base_bdevs_operational": 3, 00:09:18.368 "base_bdevs_list": [ 00:09:18.368 { 00:09:18.368 "name": "pt1", 00:09:18.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.368 "is_configured": true, 00:09:18.368 "data_offset": 2048, 00:09:18.368 "data_size": 63488 00:09:18.368 }, 00:09:18.368 { 00:09:18.368 "name": null, 00:09:18.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.368 "is_configured": false, 00:09:18.368 "data_offset": 2048, 00:09:18.368 "data_size": 63488 00:09:18.368 }, 00:09:18.368 { 00:09:18.368 "name": null, 00:09:18.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.368 "is_configured": false, 00:09:18.368 "data_offset": 2048, 00:09:18.368 "data_size": 63488 00:09:18.368 } 00:09:18.368 ] 00:09:18.368 }' 00:09:18.368 15:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.368 15:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.628 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:18.628 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:18.628 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.628 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.628 [2024-11-25 15:36:17.239443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:18.628 [2024-11-25 15:36:17.239611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.628 [2024-11-25 15:36:17.239657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:18.628 [2024-11-25 15:36:17.239704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.628 [2024-11-25 15:36:17.240268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.628 [2024-11-25 15:36:17.240292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:18.628 [2024-11-25 15:36:17.240398] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:18.628 [2024-11-25 15:36:17.240422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.628 pt2 00:09:18.628 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.628 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.629 [2024-11-25 15:36:17.251391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.629 15:36:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.629 "name": "raid_bdev1", 00:09:18.629 "uuid": "a4845243-2568-4877-ad0a-97142aa57cce", 00:09:18.629 "strip_size_kb": 64, 00:09:18.629 "state": "configuring", 00:09:18.629 "raid_level": "raid0", 00:09:18.629 "superblock": true, 00:09:18.629 "num_base_bdevs": 3, 00:09:18.629 "num_base_bdevs_discovered": 1, 00:09:18.629 "num_base_bdevs_operational": 3, 00:09:18.629 "base_bdevs_list": [ 00:09:18.629 { 00:09:18.629 "name": "pt1", 00:09:18.629 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.629 "is_configured": true, 00:09:18.629 "data_offset": 2048, 00:09:18.629 "data_size": 63488 00:09:18.629 }, 00:09:18.629 { 00:09:18.629 "name": null, 00:09:18.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.629 "is_configured": false, 00:09:18.629 "data_offset": 0, 00:09:18.629 "data_size": 63488 00:09:18.629 }, 00:09:18.629 { 00:09:18.629 "name": null, 00:09:18.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.629 
"is_configured": false, 00:09:18.629 "data_offset": 2048, 00:09:18.629 "data_size": 63488 00:09:18.629 } 00:09:18.629 ] 00:09:18.629 }' 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.629 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.199 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:19.199 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:19.199 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.199 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.199 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.199 [2024-11-25 15:36:17.686654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.199 [2024-11-25 15:36:17.686814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.199 [2024-11-25 15:36:17.686851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:19.199 [2024-11-25 15:36:17.686888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.199 [2024-11-25 15:36:17.687494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.199 [2024-11-25 15:36:17.687563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.199 [2024-11-25 15:36:17.687715] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:19.199 [2024-11-25 15:36:17.687770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.199 pt2 00:09:19.199 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:19.199 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:19.199 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:19.199 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:19.199 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.199 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.199 [2024-11-25 15:36:17.698576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:19.199 [2024-11-25 15:36:17.698662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.199 [2024-11-25 15:36:17.698697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:19.199 [2024-11-25 15:36:17.698730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.199 [2024-11-25 15:36:17.699166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.199 [2024-11-25 15:36:17.699227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:19.199 [2024-11-25 15:36:17.699314] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:19.199 [2024-11-25 15:36:17.699363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:19.199 [2024-11-25 15:36:17.699518] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:19.199 [2024-11-25 15:36:17.699558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:19.199 [2024-11-25 15:36:17.699835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:19.199 [2024-11-25 15:36:17.700032] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:19.199 [2024-11-25 15:36:17.700068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:19.199 [2024-11-25 15:36:17.700264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.199 pt3 00:09:19.199 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.199 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:19.199 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.200 "name": "raid_bdev1", 00:09:19.200 "uuid": "a4845243-2568-4877-ad0a-97142aa57cce", 00:09:19.200 "strip_size_kb": 64, 00:09:19.200 "state": "online", 00:09:19.200 "raid_level": "raid0", 00:09:19.200 "superblock": true, 00:09:19.200 "num_base_bdevs": 3, 00:09:19.200 "num_base_bdevs_discovered": 3, 00:09:19.200 "num_base_bdevs_operational": 3, 00:09:19.200 "base_bdevs_list": [ 00:09:19.200 { 00:09:19.200 "name": "pt1", 00:09:19.200 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.200 "is_configured": true, 00:09:19.200 "data_offset": 2048, 00:09:19.200 "data_size": 63488 00:09:19.200 }, 00:09:19.200 { 00:09:19.200 "name": "pt2", 00:09:19.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.200 "is_configured": true, 00:09:19.200 "data_offset": 2048, 00:09:19.200 "data_size": 63488 00:09:19.200 }, 00:09:19.200 { 00:09:19.200 "name": "pt3", 00:09:19.200 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.200 "is_configured": true, 00:09:19.200 "data_offset": 2048, 00:09:19.200 "data_size": 63488 00:09:19.200 } 00:09:19.200 ] 00:09:19.200 }' 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.200 15:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.460 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:19.460 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:19.460 15:36:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.460 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:19.460 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.460 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.460 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.460 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.460 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.460 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.460 [2024-11-25 15:36:18.130281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:19.720 "name": "raid_bdev1", 00:09:19.720 "aliases": [ 00:09:19.720 "a4845243-2568-4877-ad0a-97142aa57cce" 00:09:19.720 ], 00:09:19.720 "product_name": "Raid Volume", 00:09:19.720 "block_size": 512, 00:09:19.720 "num_blocks": 190464, 00:09:19.720 "uuid": "a4845243-2568-4877-ad0a-97142aa57cce", 00:09:19.720 "assigned_rate_limits": { 00:09:19.720 "rw_ios_per_sec": 0, 00:09:19.720 "rw_mbytes_per_sec": 0, 00:09:19.720 "r_mbytes_per_sec": 0, 00:09:19.720 "w_mbytes_per_sec": 0 00:09:19.720 }, 00:09:19.720 "claimed": false, 00:09:19.720 "zoned": false, 00:09:19.720 "supported_io_types": { 00:09:19.720 "read": true, 00:09:19.720 "write": true, 00:09:19.720 "unmap": true, 00:09:19.720 "flush": true, 00:09:19.720 "reset": true, 00:09:19.720 "nvme_admin": false, 00:09:19.720 "nvme_io": false, 00:09:19.720 "nvme_io_md": false, 00:09:19.720 
"write_zeroes": true, 00:09:19.720 "zcopy": false, 00:09:19.720 "get_zone_info": false, 00:09:19.720 "zone_management": false, 00:09:19.720 "zone_append": false, 00:09:19.720 "compare": false, 00:09:19.720 "compare_and_write": false, 00:09:19.720 "abort": false, 00:09:19.720 "seek_hole": false, 00:09:19.720 "seek_data": false, 00:09:19.720 "copy": false, 00:09:19.720 "nvme_iov_md": false 00:09:19.720 }, 00:09:19.720 "memory_domains": [ 00:09:19.720 { 00:09:19.720 "dma_device_id": "system", 00:09:19.720 "dma_device_type": 1 00:09:19.720 }, 00:09:19.720 { 00:09:19.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.720 "dma_device_type": 2 00:09:19.720 }, 00:09:19.720 { 00:09:19.720 "dma_device_id": "system", 00:09:19.720 "dma_device_type": 1 00:09:19.720 }, 00:09:19.720 { 00:09:19.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.720 "dma_device_type": 2 00:09:19.720 }, 00:09:19.720 { 00:09:19.720 "dma_device_id": "system", 00:09:19.720 "dma_device_type": 1 00:09:19.720 }, 00:09:19.720 { 00:09:19.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.720 "dma_device_type": 2 00:09:19.720 } 00:09:19.720 ], 00:09:19.720 "driver_specific": { 00:09:19.720 "raid": { 00:09:19.720 "uuid": "a4845243-2568-4877-ad0a-97142aa57cce", 00:09:19.720 "strip_size_kb": 64, 00:09:19.720 "state": "online", 00:09:19.720 "raid_level": "raid0", 00:09:19.720 "superblock": true, 00:09:19.720 "num_base_bdevs": 3, 00:09:19.720 "num_base_bdevs_discovered": 3, 00:09:19.720 "num_base_bdevs_operational": 3, 00:09:19.720 "base_bdevs_list": [ 00:09:19.720 { 00:09:19.720 "name": "pt1", 00:09:19.720 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.720 "is_configured": true, 00:09:19.720 "data_offset": 2048, 00:09:19.720 "data_size": 63488 00:09:19.720 }, 00:09:19.720 { 00:09:19.720 "name": "pt2", 00:09:19.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.720 "is_configured": true, 00:09:19.720 "data_offset": 2048, 00:09:19.720 "data_size": 63488 00:09:19.720 }, 00:09:19.720 
{ 00:09:19.720 "name": "pt3", 00:09:19.720 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.720 "is_configured": true, 00:09:19.720 "data_offset": 2048, 00:09:19.720 "data_size": 63488 00:09:19.720 } 00:09:19.720 ] 00:09:19.720 } 00:09:19.720 } 00:09:19.720 }' 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:19.720 pt2 00:09:19.720 pt3' 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:19.720 15:36:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.720 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.980 
[2024-11-25 15:36:18.409617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a4845243-2568-4877-ad0a-97142aa57cce '!=' a4845243-2568-4877-ad0a-97142aa57cce ']' 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64844 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64844 ']' 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64844 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64844 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64844' 00:09:19.980 killing process with pid 64844 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64844 00:09:19.980 [2024-11-25 15:36:18.481122] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.980 15:36:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 64844 00:09:19.980 [2024-11-25 15:36:18.481286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.981 [2024-11-25 15:36:18.481360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.981 [2024-11-25 15:36:18.481374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:20.240 [2024-11-25 15:36:18.811086] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.622 15:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:21.622 00:09:21.622 real 0m5.293s 00:09:21.622 user 0m7.521s 00:09:21.622 sys 0m0.912s 00:09:21.622 15:36:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.622 ************************************ 00:09:21.622 END TEST raid_superblock_test 00:09:21.622 ************************************ 00:09:21.622 15:36:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.622 15:36:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:21.622 15:36:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:21.622 15:36:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.622 15:36:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.622 ************************************ 00:09:21.622 START TEST raid_read_error_test 00:09:21.622 ************************************ 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:21.622 15:36:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.shHucKIqJ1 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65103 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65103 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65103 ']' 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.622 15:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.622 [2024-11-25 15:36:20.162433] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:09:21.622 [2024-11-25 15:36:20.162672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65103 ] 00:09:21.881 [2024-11-25 15:36:20.336455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.881 [2024-11-25 15:36:20.474110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.141 [2024-11-25 15:36:20.707808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.141 [2024-11-25 15:36:20.707946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.402 15:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.402 15:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:22.402 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.402 15:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:22.402 15:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.402 15:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.402 BaseBdev1_malloc 00:09:22.402 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.402 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:22.402 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.402 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.402 true 00:09:22.402 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:22.402 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:22.402 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.402 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.402 [2024-11-25 15:36:21.050930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:22.402 [2024-11-25 15:36:21.051003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.402 [2024-11-25 15:36:21.051034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:22.402 [2024-11-25 15:36:21.051047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.402 [2024-11-25 15:36:21.053483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.402 [2024-11-25 15:36:21.053522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:22.402 BaseBdev1 00:09:22.402 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.402 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.402 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:22.402 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.402 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.663 BaseBdev2_malloc 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.663 true 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.663 [2024-11-25 15:36:21.123928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:22.663 [2024-11-25 15:36:21.124069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.663 [2024-11-25 15:36:21.124101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:22.663 [2024-11-25 15:36:21.124127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.663 [2024-11-25 15:36:21.126441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.663 [2024-11-25 15:36:21.126542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:22.663 BaseBdev2 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.663 BaseBdev3_malloc 00:09:22.663 15:36:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.663 true 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.663 [2024-11-25 15:36:21.209287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:22.663 [2024-11-25 15:36:21.209338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.663 [2024-11-25 15:36:21.209355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:22.663 [2024-11-25 15:36:21.209367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.663 [2024-11-25 15:36:21.211708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.663 [2024-11-25 15:36:21.211803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:22.663 BaseBdev3 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.663 [2024-11-25 15:36:21.221343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.663 [2024-11-25 15:36:21.223418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.663 [2024-11-25 15:36:21.223542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.663 [2024-11-25 15:36:21.223758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:22.663 [2024-11-25 15:36:21.223806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:22.663 [2024-11-25 15:36:21.224085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:22.663 [2024-11-25 15:36:21.224275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:22.663 [2024-11-25 15:36:21.224318] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:22.663 [2024-11-25 15:36:21.224488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.663 15:36:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.663 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.663 "name": "raid_bdev1", 00:09:22.663 "uuid": "2a9a1c95-191f-4607-b594-e2209344653c", 00:09:22.663 "strip_size_kb": 64, 00:09:22.663 "state": "online", 00:09:22.663 "raid_level": "raid0", 00:09:22.663 "superblock": true, 00:09:22.663 "num_base_bdevs": 3, 00:09:22.663 "num_base_bdevs_discovered": 3, 00:09:22.663 "num_base_bdevs_operational": 3, 00:09:22.663 "base_bdevs_list": [ 00:09:22.663 { 00:09:22.663 "name": "BaseBdev1", 00:09:22.663 "uuid": "c1840d33-e903-52bc-a7ce-e246f1959583", 00:09:22.663 "is_configured": true, 00:09:22.663 "data_offset": 2048, 00:09:22.663 "data_size": 63488 00:09:22.663 }, 00:09:22.663 { 00:09:22.663 "name": "BaseBdev2", 00:09:22.663 "uuid": "2d8d016f-b6ac-5578-b88f-f5c252c54018", 00:09:22.663 "is_configured": true, 00:09:22.664 "data_offset": 2048, 00:09:22.664 "data_size": 63488 
00:09:22.664 }, 00:09:22.664 { 00:09:22.664 "name": "BaseBdev3", 00:09:22.664 "uuid": "36375f05-30c4-5784-a37a-ada670f2cbb8", 00:09:22.664 "is_configured": true, 00:09:22.664 "data_offset": 2048, 00:09:22.664 "data_size": 63488 00:09:22.664 } 00:09:22.664 ] 00:09:22.664 }' 00:09:22.664 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.664 15:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.234 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:23.234 15:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:23.234 [2024-11-25 15:36:21.714111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.192 "name": "raid_bdev1", 00:09:24.192 "uuid": "2a9a1c95-191f-4607-b594-e2209344653c", 00:09:24.192 "strip_size_kb": 64, 00:09:24.192 "state": "online", 00:09:24.192 "raid_level": "raid0", 00:09:24.192 "superblock": true, 00:09:24.192 "num_base_bdevs": 3, 00:09:24.192 "num_base_bdevs_discovered": 3, 00:09:24.192 "num_base_bdevs_operational": 3, 00:09:24.192 "base_bdevs_list": [ 00:09:24.192 { 00:09:24.192 "name": "BaseBdev1", 00:09:24.192 "uuid": "c1840d33-e903-52bc-a7ce-e246f1959583", 00:09:24.192 "is_configured": true, 00:09:24.192 "data_offset": 2048, 00:09:24.192 "data_size": 63488 
00:09:24.192 }, 00:09:24.192 { 00:09:24.192 "name": "BaseBdev2", 00:09:24.192 "uuid": "2d8d016f-b6ac-5578-b88f-f5c252c54018", 00:09:24.192 "is_configured": true, 00:09:24.192 "data_offset": 2048, 00:09:24.192 "data_size": 63488 00:09:24.192 }, 00:09:24.192 { 00:09:24.192 "name": "BaseBdev3", 00:09:24.192 "uuid": "36375f05-30c4-5784-a37a-ada670f2cbb8", 00:09:24.192 "is_configured": true, 00:09:24.192 "data_offset": 2048, 00:09:24.192 "data_size": 63488 00:09:24.192 } 00:09:24.192 ] 00:09:24.192 }' 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.192 15:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.452 15:36:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:24.452 15:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.452 15:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.452 [2024-11-25 15:36:23.082858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:24.452 [2024-11-25 15:36:23.082999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.452 [2024-11-25 15:36:23.085534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.452 [2024-11-25 15:36:23.085623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.452 [2024-11-25 15:36:23.085683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.452 [2024-11-25 15:36:23.085721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:24.452 { 00:09:24.452 "results": [ 00:09:24.452 { 00:09:24.452 "job": "raid_bdev1", 00:09:24.452 "core_mask": "0x1", 00:09:24.452 "workload": "randrw", 00:09:24.452 "percentage": 50, 
00:09:24.452 "status": "finished", 00:09:24.452 "queue_depth": 1, 00:09:24.452 "io_size": 131072, 00:09:24.452 "runtime": 1.369419, 00:09:24.452 "iops": 13907.357791881082, 00:09:24.452 "mibps": 1738.4197239851353, 00:09:24.452 "io_failed": 1, 00:09:24.452 "io_timeout": 0, 00:09:24.452 "avg_latency_us": 101.2892627227026, 00:09:24.452 "min_latency_us": 25.152838427947597, 00:09:24.452 "max_latency_us": 1416.6078602620087 00:09:24.452 } 00:09:24.452 ], 00:09:24.452 "core_count": 1 00:09:24.452 } 00:09:24.452 15:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.453 15:36:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65103 00:09:24.453 15:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65103 ']' 00:09:24.453 15:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65103 00:09:24.453 15:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:24.453 15:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.453 15:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65103 00:09:24.453 15:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.453 15:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.453 killing process with pid 65103 00:09:24.453 15:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65103' 00:09:24.453 15:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65103 00:09:24.453 [2024-11-25 15:36:23.130754] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:24.453 15:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65103 00:09:24.713 [2024-11-25 
15:36:23.383448] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.096 15:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.shHucKIqJ1 00:09:26.096 15:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:26.096 15:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:26.096 15:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:26.096 15:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:26.096 15:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:26.096 15:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:26.096 15:36:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:26.096 00:09:26.096 real 0m4.591s 00:09:26.096 user 0m5.291s 00:09:26.096 sys 0m0.630s 00:09:26.096 15:36:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.096 ************************************ 00:09:26.096 END TEST raid_read_error_test 00:09:26.096 ************************************ 00:09:26.096 15:36:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.096 15:36:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:26.096 15:36:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:26.096 15:36:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.096 15:36:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.096 ************************************ 00:09:26.096 START TEST raid_write_error_test 00:09:26.096 ************************************ 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:26.096 15:36:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:26.096 15:36:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CTkVaJTXKF 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65243 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65243 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65243 ']' 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:26.096 15:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.356 [2024-11-25 15:36:24.825753] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization...
00:09:26.356 [2024-11-25 15:36:24.825940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65243 ]
00:09:26.356 [2024-11-25 15:36:24.978202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:26.616 [2024-11-25 15:36:25.117040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:26.876 [2024-11-25 15:36:25.354774] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:26.876 [2024-11-25 15:36:25.354956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.136 BaseBdev1_malloc
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.136 true
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.136 [2024-11-25 15:36:25.722535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:27.136 [2024-11-25 15:36:25.722690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:27.136 [2024-11-25 15:36:25.722729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:09:27.136 [2024-11-25 15:36:25.722761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:27.136 [2024-11-25 15:36:25.725148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:27.136 [2024-11-25 15:36:25.725221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:27.136 BaseBdev1
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.136 BaseBdev2_malloc
00:09:27.136 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.137 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:27.137 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.137 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.137 true
00:09:27.137 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.137 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:27.137 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.137 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.137 [2024-11-25 15:36:25.795185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:27.137 [2024-11-25 15:36:25.795314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:27.137 [2024-11-25 15:36:25.795336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:09:27.137 [2024-11-25 15:36:25.795348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:27.137 [2024-11-25 15:36:25.797724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:27.137 [2024-11-25 15:36:25.797765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:27.137 BaseBdev2
00:09:27.137 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.137 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:27.137 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:09:27.137 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.137 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.397 BaseBdev3_malloc
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.397 true
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.397 [2024-11-25 15:36:25.880467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:09:27.397 [2024-11-25 15:36:25.880527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:27.397 [2024-11-25 15:36:25.880546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:09:27.397 [2024-11-25 15:36:25.880557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:27.397 [2024-11-25 15:36:25.882916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:27.397 [2024-11-25 15:36:25.883065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:09:27.397 BaseBdev3
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.397 [2024-11-25 15:36:25.892523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:27.397 [2024-11-25 15:36:25.894599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:27.397 [2024-11-25 15:36:25.894739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:27.397 [2024-11-25 15:36:25.894944] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:09:27.397 [2024-11-25 15:36:25.894960] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:27.397 [2024-11-25 15:36:25.895224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560
00:09:27.397 [2024-11-25 15:36:25.895388] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:09:27.397 [2024-11-25 15:36:25.895404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:09:27.397 [2024-11-25 15:36:25.895546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:27.397 "name": "raid_bdev1",
00:09:27.397 "uuid": "d3724f7a-71f0-44be-86bc-ee4e88059847",
00:09:27.397 "strip_size_kb": 64,
00:09:27.397 "state": "online",
00:09:27.397 "raid_level": "raid0",
00:09:27.397 "superblock": true,
00:09:27.397 "num_base_bdevs": 3,
00:09:27.397 "num_base_bdevs_discovered": 3,
00:09:27.397 "num_base_bdevs_operational": 3,
00:09:27.397 "base_bdevs_list": [
00:09:27.397 {
00:09:27.397 "name": "BaseBdev1",
00:09:27.397 "uuid": "1f9ec46e-e32f-5622-9800-be5a41fa7bb9",
00:09:27.397 "is_configured": true,
00:09:27.397 "data_offset": 2048,
00:09:27.397 "data_size": 63488
00:09:27.397 },
00:09:27.397 {
00:09:27.397 "name": "BaseBdev2",
00:09:27.397 "uuid": "df636d47-0392-55e1-982b-c3c7f120eea6",
00:09:27.397 "is_configured": true,
00:09:27.397 "data_offset": 2048,
00:09:27.397 "data_size": 63488
00:09:27.397 },
00:09:27.397 {
00:09:27.397 "name": "BaseBdev3",
00:09:27.397 "uuid": "e07b4475-cab2-5285-97b7-e23e849fd720",
00:09:27.397 "is_configured": true,
00:09:27.397 "data_offset": 2048,
00:09:27.397 "data_size": 63488
00:09:27.397 }
00:09:27.397 ]
00:09:27.397 }'
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:27.397 15:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.966 15:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:27.966 15:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:27.966 [2024-11-25 15:36:26.437094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:09:28.906 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:09:28.906 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.906 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.906 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.906 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:28.906 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:09:28.906 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:09:28.906 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:09:28.906 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:28.907 "name": "raid_bdev1",
00:09:28.907 "uuid": "d3724f7a-71f0-44be-86bc-ee4e88059847",
00:09:28.907 "strip_size_kb": 64,
00:09:28.907 "state": "online",
00:09:28.907 "raid_level": "raid0",
00:09:28.907 "superblock": true,
00:09:28.907 "num_base_bdevs": 3,
00:09:28.907 "num_base_bdevs_discovered": 3,
00:09:28.907 "num_base_bdevs_operational": 3,
00:09:28.907 "base_bdevs_list": [
00:09:28.907 {
00:09:28.907 "name": "BaseBdev1",
00:09:28.907 "uuid": "1f9ec46e-e32f-5622-9800-be5a41fa7bb9",
00:09:28.907 "is_configured": true,
00:09:28.907 "data_offset": 2048,
00:09:28.907 "data_size": 63488
00:09:28.907 },
00:09:28.907 {
00:09:28.907 "name": "BaseBdev2",
00:09:28.907 "uuid": "df636d47-0392-55e1-982b-c3c7f120eea6",
00:09:28.907 "is_configured": true,
00:09:28.907 "data_offset": 2048,
00:09:28.907 "data_size": 63488
00:09:28.907 },
00:09:28.907 {
00:09:28.907 "name": "BaseBdev3",
00:09:28.907 "uuid": "e07b4475-cab2-5285-97b7-e23e849fd720",
00:09:28.907 "is_configured": true,
00:09:28.907 "data_offset": 2048,
00:09:28.907 "data_size": 63488
00:09:28.907 }
00:09:28.907 ]
00:09:28.907 }'
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:28.907 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.167 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:29.167 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.167 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.167 [2024-11-25 15:36:27.833674] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:29.167 [2024-11-25 15:36:27.833832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:29.167 [2024-11-25 15:36:27.836457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:29.167 [2024-11-25 15:36:27.836500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:29.167 [2024-11-25 15:36:27.836541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:29.167 [2024-11-25 15:36:27.836550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:09:29.167 {
00:09:29.167 "results": [
00:09:29.167 {
00:09:29.167 "job": "raid_bdev1",
00:09:29.167 "core_mask": "0x1",
00:09:29.167 "workload": "randrw",
00:09:29.167 "percentage": 50,
00:09:29.167 "status": "finished",
00:09:29.167 "queue_depth": 1,
00:09:29.167 "io_size": 131072,
00:09:29.167 "runtime": 1.397215,
00:09:29.167 "iops": 13938.441828923967,
00:09:29.167 "mibps": 1742.305228615496,
00:09:29.167 "io_failed": 1,
00:09:29.167 "io_timeout": 0,
00:09:29.167 "avg_latency_us": 101.03384517143931,
00:09:29.167 "min_latency_us": 25.4882096069869,
00:09:29.167 "max_latency_us": 1445.2262008733624
00:09:29.167 }
00:09:29.167 ],
00:09:29.167 "core_count": 1
00:09:29.167 }
00:09:29.167 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.167 15:36:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65243
00:09:29.167 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65243 ']'
00:09:29.167 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65243
00:09:29.167 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:09:29.167 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:29.428 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65243
00:09:29.428 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:29.428 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:29.428 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65243'
00:09:29.428 killing process with pid 65243
00:09:29.428 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65243
00:09:29.428 [2024-11-25 15:36:27.867545] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:29.428 15:36:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65243
00:09:29.686 [2024-11-25 15:36:28.114600] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:31.071 15:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:31.071 15:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CTkVaJTXKF
00:09:31.071 15:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:31.071 15:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72
00:09:31.071 15:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:09:31.071 15:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:31.071 15:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:31.071 15:36:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]]
00:09:31.071
00:09:31.071 real 0m4.640s
00:09:31.071 user 0m5.419s
00:09:31.071 sys 0m0.634s
00:09:31.071 15:36:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:31.071 15:36:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.071 ************************************
00:09:31.071 END TEST raid_write_error_test
00:09:31.071 ************************************
00:09:31.071 15:36:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:09:31.071 15:36:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false
00:09:31.071 15:36:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:31.071 15:36:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:31.071 15:36:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:31.071 ************************************
00:09:31.071 START TEST raid_state_function_test
00:09:31.071 ************************************
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:09:31.071 Process raid pid: 65393
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65393
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65393'
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65393
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65393 ']'
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:31.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:31.071 15:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.071 [2024-11-25 15:36:29.539642] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization...
00:09:31.071 [2024-11-25 15:36:29.539885] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:31.071 [2024-11-25 15:36:29.717277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:31.331 [2024-11-25 15:36:29.855912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:31.590 [2024-11-25 15:36:30.102829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:31.590 [2024-11-25 15:36:30.102986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.850 [2024-11-25 15:36:30.359166] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:31.850 [2024-11-25 15:36:30.359341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:31.850 [2024-11-25 15:36:30.359372] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:31.850 [2024-11-25 15:36:30.359397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:31.850 [2024-11-25 15:36:30.359415] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:31.850 [2024-11-25 15:36:30.359438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:31.850 "name": "Existed_Raid",
00:09:31.850 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:31.850 "strip_size_kb": 64,
00:09:31.850 "state": "configuring",
00:09:31.850 "raid_level": "concat",
00:09:31.850 "superblock": false,
00:09:31.850 "num_base_bdevs": 3,
00:09:31.850 "num_base_bdevs_discovered": 0,
00:09:31.850 "num_base_bdevs_operational": 3,
00:09:31.850 "base_bdevs_list": [
00:09:31.850 {
00:09:31.850 "name": "BaseBdev1",
00:09:31.850 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:31.850 "is_configured": false,
00:09:31.850 "data_offset": 0,
00:09:31.850 "data_size": 0
00:09:31.850 },
00:09:31.850 {
00:09:31.850 "name": "BaseBdev2",
00:09:31.850 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:31.850 "is_configured": false,
00:09:31.850 "data_offset": 0,
00:09:31.850 "data_size": 0
00:09:31.850 },
00:09:31.850 {
00:09:31.850 "name": "BaseBdev3",
00:09:31.850 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:31.850 "is_configured": false,
00:09:31.850 "data_offset": 0,
00:09:31.850 "data_size": 0
00:09:31.850 }
00:09:31.850 ]
00:09:31.850 }'
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:31.850 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.438 [2024-11-25 15:36:30.798356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:32.438 [2024-11-25 15:36:30.798449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.438 [2024-11-25 15:36:30.810301] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:32.438 [2024-11-25 15:36:30.810384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:32.438 [2024-11-25 15:36:30.810413] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:32.438 [2024-11-25 15:36:30.810436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:32.438 [2024-11-25 15:36:30.810453] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:32.438 [2024-11-25 15:36:30.810481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.438 [2024-11-25 15:36:30.857162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:32.438 BaseBdev1
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set
+x 00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.438 [ 00:09:32.438 { 00:09:32.438 "name": "BaseBdev1", 00:09:32.438 "aliases": [ 00:09:32.438 "3856ce65-5b6a-4919-acb9-0bc4df58a00b" 00:09:32.438 ], 00:09:32.438 "product_name": "Malloc disk", 00:09:32.438 "block_size": 512, 00:09:32.438 "num_blocks": 65536, 00:09:32.438 "uuid": "3856ce65-5b6a-4919-acb9-0bc4df58a00b", 00:09:32.438 "assigned_rate_limits": { 00:09:32.438 "rw_ios_per_sec": 0, 00:09:32.438 "rw_mbytes_per_sec": 0, 00:09:32.438 "r_mbytes_per_sec": 0, 00:09:32.438 "w_mbytes_per_sec": 0 00:09:32.438 }, 00:09:32.438 "claimed": true, 00:09:32.438 "claim_type": "exclusive_write", 00:09:32.438 "zoned": false, 00:09:32.438 "supported_io_types": { 00:09:32.438 "read": true, 00:09:32.438 "write": true, 00:09:32.438 "unmap": true, 00:09:32.438 "flush": true, 00:09:32.438 "reset": true, 00:09:32.438 "nvme_admin": false, 00:09:32.438 "nvme_io": false, 00:09:32.438 "nvme_io_md": false, 00:09:32.438 "write_zeroes": true, 00:09:32.438 "zcopy": true, 00:09:32.438 "get_zone_info": false, 00:09:32.438 "zone_management": false, 00:09:32.438 "zone_append": false, 00:09:32.438 "compare": false, 00:09:32.438 "compare_and_write": false, 00:09:32.438 "abort": true, 00:09:32.438 "seek_hole": false, 00:09:32.438 "seek_data": false, 00:09:32.438 "copy": true, 00:09:32.438 "nvme_iov_md": false 00:09:32.438 }, 00:09:32.438 "memory_domains": [ 00:09:32.438 { 00:09:32.438 "dma_device_id": "system", 00:09:32.438 "dma_device_type": 1 00:09:32.438 }, 00:09:32.438 { 00:09:32.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:32.438 "dma_device_type": 2 00:09:32.438 } 00:09:32.438 ], 00:09:32.438 "driver_specific": {} 00:09:32.438 } 00:09:32.438 ] 00:09:32.438 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.439 15:36:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.439 "name": "Existed_Raid", 00:09:32.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.439 "strip_size_kb": 64, 00:09:32.439 "state": "configuring", 00:09:32.439 "raid_level": "concat", 00:09:32.439 "superblock": false, 00:09:32.439 "num_base_bdevs": 3, 00:09:32.439 "num_base_bdevs_discovered": 1, 00:09:32.439 "num_base_bdevs_operational": 3, 00:09:32.439 "base_bdevs_list": [ 00:09:32.439 { 00:09:32.439 "name": "BaseBdev1", 00:09:32.439 "uuid": "3856ce65-5b6a-4919-acb9-0bc4df58a00b", 00:09:32.439 "is_configured": true, 00:09:32.439 "data_offset": 0, 00:09:32.439 "data_size": 65536 00:09:32.439 }, 00:09:32.439 { 00:09:32.439 "name": "BaseBdev2", 00:09:32.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.439 "is_configured": false, 00:09:32.439 "data_offset": 0, 00:09:32.439 "data_size": 0 00:09:32.439 }, 00:09:32.439 { 00:09:32.439 "name": "BaseBdev3", 00:09:32.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.439 "is_configured": false, 00:09:32.439 "data_offset": 0, 00:09:32.439 "data_size": 0 00:09:32.439 } 00:09:32.439 ] 00:09:32.439 }' 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.439 15:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.705 [2024-11-25 15:36:31.360354] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.705 [2024-11-25 15:36:31.360469] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.705 [2024-11-25 15:36:31.372370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.705 [2024-11-25 15:36:31.374240] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.705 [2024-11-25 15:36:31.374317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.705 [2024-11-25 15:36:31.374332] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:32.705 [2024-11-25 15:36:31.374342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.705 15:36:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.705 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.965 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.965 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.965 "name": "Existed_Raid", 00:09:32.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.965 "strip_size_kb": 64, 00:09:32.965 "state": "configuring", 00:09:32.965 "raid_level": "concat", 00:09:32.965 "superblock": false, 00:09:32.965 "num_base_bdevs": 3, 00:09:32.965 "num_base_bdevs_discovered": 1, 00:09:32.965 "num_base_bdevs_operational": 3, 00:09:32.965 "base_bdevs_list": [ 00:09:32.965 { 00:09:32.965 "name": "BaseBdev1", 00:09:32.965 "uuid": "3856ce65-5b6a-4919-acb9-0bc4df58a00b", 00:09:32.965 "is_configured": true, 00:09:32.965 "data_offset": 
0, 00:09:32.965 "data_size": 65536 00:09:32.965 }, 00:09:32.965 { 00:09:32.965 "name": "BaseBdev2", 00:09:32.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.965 "is_configured": false, 00:09:32.965 "data_offset": 0, 00:09:32.965 "data_size": 0 00:09:32.965 }, 00:09:32.965 { 00:09:32.965 "name": "BaseBdev3", 00:09:32.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.965 "is_configured": false, 00:09:32.965 "data_offset": 0, 00:09:32.965 "data_size": 0 00:09:32.965 } 00:09:32.965 ] 00:09:32.965 }' 00:09:32.965 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.965 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.225 [2024-11-25 15:36:31.819919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.225 BaseBdev2 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.225 [ 00:09:33.225 { 00:09:33.225 "name": "BaseBdev2", 00:09:33.225 "aliases": [ 00:09:33.225 "830cb3ac-be26-4388-82cd-b08743f4c7bf" 00:09:33.225 ], 00:09:33.225 "product_name": "Malloc disk", 00:09:33.225 "block_size": 512, 00:09:33.225 "num_blocks": 65536, 00:09:33.225 "uuid": "830cb3ac-be26-4388-82cd-b08743f4c7bf", 00:09:33.225 "assigned_rate_limits": { 00:09:33.225 "rw_ios_per_sec": 0, 00:09:33.225 "rw_mbytes_per_sec": 0, 00:09:33.225 "r_mbytes_per_sec": 0, 00:09:33.225 "w_mbytes_per_sec": 0 00:09:33.225 }, 00:09:33.225 "claimed": true, 00:09:33.225 "claim_type": "exclusive_write", 00:09:33.225 "zoned": false, 00:09:33.225 "supported_io_types": { 00:09:33.225 "read": true, 00:09:33.225 "write": true, 00:09:33.225 "unmap": true, 00:09:33.225 "flush": true, 00:09:33.225 "reset": true, 00:09:33.225 "nvme_admin": false, 00:09:33.225 "nvme_io": false, 00:09:33.225 "nvme_io_md": false, 00:09:33.225 "write_zeroes": true, 00:09:33.225 "zcopy": true, 00:09:33.225 "get_zone_info": false, 00:09:33.225 "zone_management": false, 00:09:33.225 "zone_append": false, 00:09:33.225 "compare": false, 00:09:33.225 "compare_and_write": false, 00:09:33.225 "abort": true, 00:09:33.225 "seek_hole": 
false, 00:09:33.225 "seek_data": false, 00:09:33.225 "copy": true, 00:09:33.225 "nvme_iov_md": false 00:09:33.225 }, 00:09:33.225 "memory_domains": [ 00:09:33.225 { 00:09:33.225 "dma_device_id": "system", 00:09:33.225 "dma_device_type": 1 00:09:33.225 }, 00:09:33.225 { 00:09:33.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.225 "dma_device_type": 2 00:09:33.225 } 00:09:33.225 ], 00:09:33.225 "driver_specific": {} 00:09:33.225 } 00:09:33.225 ] 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:33.225 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.226 15:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.484 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.484 "name": "Existed_Raid", 00:09:33.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.484 "strip_size_kb": 64, 00:09:33.484 "state": "configuring", 00:09:33.484 "raid_level": "concat", 00:09:33.484 "superblock": false, 00:09:33.484 "num_base_bdevs": 3, 00:09:33.484 "num_base_bdevs_discovered": 2, 00:09:33.484 "num_base_bdevs_operational": 3, 00:09:33.484 "base_bdevs_list": [ 00:09:33.484 { 00:09:33.484 "name": "BaseBdev1", 00:09:33.484 "uuid": "3856ce65-5b6a-4919-acb9-0bc4df58a00b", 00:09:33.484 "is_configured": true, 00:09:33.484 "data_offset": 0, 00:09:33.484 "data_size": 65536 00:09:33.484 }, 00:09:33.484 { 00:09:33.484 "name": "BaseBdev2", 00:09:33.484 "uuid": "830cb3ac-be26-4388-82cd-b08743f4c7bf", 00:09:33.484 "is_configured": true, 00:09:33.484 "data_offset": 0, 00:09:33.484 "data_size": 65536 00:09:33.484 }, 00:09:33.484 { 00:09:33.484 "name": "BaseBdev3", 00:09:33.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.484 "is_configured": false, 00:09:33.484 "data_offset": 0, 00:09:33.484 "data_size": 0 00:09:33.484 } 00:09:33.484 ] 00:09:33.484 }' 00:09:33.484 15:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.484 15:36:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.743 [2024-11-25 15:36:32.359575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.743 [2024-11-25 15:36:32.359724] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:33.743 [2024-11-25 15:36:32.359756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:33.743 [2024-11-25 15:36:32.360074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:33.743 [2024-11-25 15:36:32.360296] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:33.743 [2024-11-25 15:36:32.360339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:33.743 [2024-11-25 15:36:32.360627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.743 BaseBdev3 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.743 15:36:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.743 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.743 [ 00:09:33.743 { 00:09:33.743 "name": "BaseBdev3", 00:09:33.743 "aliases": [ 00:09:33.743 "82becbbb-9490-439c-ad33-bbb2463ff236" 00:09:33.743 ], 00:09:33.743 "product_name": "Malloc disk", 00:09:33.743 "block_size": 512, 00:09:33.743 "num_blocks": 65536, 00:09:33.743 "uuid": "82becbbb-9490-439c-ad33-bbb2463ff236", 00:09:33.743 "assigned_rate_limits": { 00:09:33.743 "rw_ios_per_sec": 0, 00:09:33.743 "rw_mbytes_per_sec": 0, 00:09:33.743 "r_mbytes_per_sec": 0, 00:09:33.743 "w_mbytes_per_sec": 0 00:09:33.743 }, 00:09:33.743 "claimed": true, 00:09:33.743 "claim_type": "exclusive_write", 00:09:33.743 "zoned": false, 00:09:33.743 "supported_io_types": { 00:09:33.744 "read": true, 00:09:33.744 "write": true, 00:09:33.744 "unmap": true, 00:09:33.744 "flush": true, 00:09:33.744 "reset": true, 00:09:33.744 "nvme_admin": false, 00:09:33.744 "nvme_io": false, 00:09:33.744 "nvme_io_md": false, 00:09:33.744 "write_zeroes": true, 00:09:33.744 "zcopy": true, 00:09:33.744 "get_zone_info": false, 00:09:33.744 "zone_management": false, 00:09:33.744 "zone_append": false, 00:09:33.744 "compare": false, 
00:09:33.744 "compare_and_write": false, 00:09:33.744 "abort": true, 00:09:33.744 "seek_hole": false, 00:09:33.744 "seek_data": false, 00:09:33.744 "copy": true, 00:09:33.744 "nvme_iov_md": false 00:09:33.744 }, 00:09:33.744 "memory_domains": [ 00:09:33.744 { 00:09:33.744 "dma_device_id": "system", 00:09:33.744 "dma_device_type": 1 00:09:33.744 }, 00:09:33.744 { 00:09:33.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.744 "dma_device_type": 2 00:09:33.744 } 00:09:33.744 ], 00:09:33.744 "driver_specific": {} 00:09:33.744 } 00:09:33.744 ] 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.744 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.004 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.004 "name": "Existed_Raid", 00:09:34.004 "uuid": "a369e86f-86ec-4b74-923d-ba54b230c2a7", 00:09:34.004 "strip_size_kb": 64, 00:09:34.004 "state": "online", 00:09:34.004 "raid_level": "concat", 00:09:34.004 "superblock": false, 00:09:34.004 "num_base_bdevs": 3, 00:09:34.004 "num_base_bdevs_discovered": 3, 00:09:34.004 "num_base_bdevs_operational": 3, 00:09:34.004 "base_bdevs_list": [ 00:09:34.004 { 00:09:34.004 "name": "BaseBdev1", 00:09:34.004 "uuid": "3856ce65-5b6a-4919-acb9-0bc4df58a00b", 00:09:34.004 "is_configured": true, 00:09:34.004 "data_offset": 0, 00:09:34.004 "data_size": 65536 00:09:34.004 }, 00:09:34.004 { 00:09:34.004 "name": "BaseBdev2", 00:09:34.004 "uuid": "830cb3ac-be26-4388-82cd-b08743f4c7bf", 00:09:34.004 "is_configured": true, 00:09:34.004 "data_offset": 0, 00:09:34.004 "data_size": 65536 00:09:34.004 }, 00:09:34.004 { 00:09:34.004 "name": "BaseBdev3", 00:09:34.004 "uuid": "82becbbb-9490-439c-ad33-bbb2463ff236", 00:09:34.004 "is_configured": true, 00:09:34.004 "data_offset": 0, 00:09:34.004 "data_size": 65536 00:09:34.004 } 00:09:34.004 ] 00:09:34.004 }' 00:09:34.004 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:34.004 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.264 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:34.264 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:34.264 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:34.264 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.264 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.264 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.264 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:34.264 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.264 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.264 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.264 [2024-11-25 15:36:32.811191] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.264 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.264 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.264 "name": "Existed_Raid", 00:09:34.264 "aliases": [ 00:09:34.264 "a369e86f-86ec-4b74-923d-ba54b230c2a7" 00:09:34.264 ], 00:09:34.264 "product_name": "Raid Volume", 00:09:34.264 "block_size": 512, 00:09:34.264 "num_blocks": 196608, 00:09:34.264 "uuid": "a369e86f-86ec-4b74-923d-ba54b230c2a7", 00:09:34.264 "assigned_rate_limits": { 00:09:34.264 "rw_ios_per_sec": 0, 00:09:34.264 "rw_mbytes_per_sec": 0, 00:09:34.264 "r_mbytes_per_sec": 
0, 00:09:34.264 "w_mbytes_per_sec": 0 00:09:34.264 }, 00:09:34.264 "claimed": false, 00:09:34.264 "zoned": false, 00:09:34.264 "supported_io_types": { 00:09:34.264 "read": true, 00:09:34.264 "write": true, 00:09:34.264 "unmap": true, 00:09:34.264 "flush": true, 00:09:34.264 "reset": true, 00:09:34.264 "nvme_admin": false, 00:09:34.264 "nvme_io": false, 00:09:34.264 "nvme_io_md": false, 00:09:34.264 "write_zeroes": true, 00:09:34.264 "zcopy": false, 00:09:34.264 "get_zone_info": false, 00:09:34.264 "zone_management": false, 00:09:34.264 "zone_append": false, 00:09:34.264 "compare": false, 00:09:34.264 "compare_and_write": false, 00:09:34.264 "abort": false, 00:09:34.264 "seek_hole": false, 00:09:34.264 "seek_data": false, 00:09:34.264 "copy": false, 00:09:34.264 "nvme_iov_md": false 00:09:34.264 }, 00:09:34.264 "memory_domains": [ 00:09:34.264 { 00:09:34.264 "dma_device_id": "system", 00:09:34.264 "dma_device_type": 1 00:09:34.264 }, 00:09:34.264 { 00:09:34.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.264 "dma_device_type": 2 00:09:34.264 }, 00:09:34.264 { 00:09:34.264 "dma_device_id": "system", 00:09:34.264 "dma_device_type": 1 00:09:34.264 }, 00:09:34.264 { 00:09:34.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.264 "dma_device_type": 2 00:09:34.264 }, 00:09:34.264 { 00:09:34.264 "dma_device_id": "system", 00:09:34.264 "dma_device_type": 1 00:09:34.264 }, 00:09:34.264 { 00:09:34.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.264 "dma_device_type": 2 00:09:34.265 } 00:09:34.265 ], 00:09:34.265 "driver_specific": { 00:09:34.265 "raid": { 00:09:34.265 "uuid": "a369e86f-86ec-4b74-923d-ba54b230c2a7", 00:09:34.265 "strip_size_kb": 64, 00:09:34.265 "state": "online", 00:09:34.265 "raid_level": "concat", 00:09:34.265 "superblock": false, 00:09:34.265 "num_base_bdevs": 3, 00:09:34.265 "num_base_bdevs_discovered": 3, 00:09:34.265 "num_base_bdevs_operational": 3, 00:09:34.265 "base_bdevs_list": [ 00:09:34.265 { 00:09:34.265 "name": "BaseBdev1", 
00:09:34.265 "uuid": "3856ce65-5b6a-4919-acb9-0bc4df58a00b", 00:09:34.265 "is_configured": true, 00:09:34.265 "data_offset": 0, 00:09:34.265 "data_size": 65536 00:09:34.265 }, 00:09:34.265 { 00:09:34.265 "name": "BaseBdev2", 00:09:34.265 "uuid": "830cb3ac-be26-4388-82cd-b08743f4c7bf", 00:09:34.265 "is_configured": true, 00:09:34.265 "data_offset": 0, 00:09:34.265 "data_size": 65536 00:09:34.265 }, 00:09:34.265 { 00:09:34.265 "name": "BaseBdev3", 00:09:34.265 "uuid": "82becbbb-9490-439c-ad33-bbb2463ff236", 00:09:34.265 "is_configured": true, 00:09:34.265 "data_offset": 0, 00:09:34.265 "data_size": 65536 00:09:34.265 } 00:09:34.265 ] 00:09:34.265 } 00:09:34.265 } 00:09:34.265 }' 00:09:34.265 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.265 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:34.265 BaseBdev2 00:09:34.265 BaseBdev3' 00:09:34.265 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.265 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:34.265 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.265 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:34.265 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.265 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.265 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.524 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:34.525 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.525 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.525 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.525 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:34.525 15:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.525 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.525 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.525 15:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
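The xtrace above shows the metadata-comparison pattern of `bdev_raid.sh@189`–`@193`: jq joins `block_size`, `md_size`, `md_interleave`, and `dif_type` into one space-separated string for the raid bdev (`cmp_raid_bdev='512   '`, with empty fields for nulls) and for each base bdev, and bash then compares the two strings literally (the `\5\1\2\ \ \ ` form is just xtrace escaping the glob-significant characters). The sketch below is not part of the test script; it reproduces the join in python3 instead of jq so it is self-contained, and the JSON literal is a hypothetical stand-in for `rpc_cmd bdev_get_bdevs` output.

```shell
# Hypothetical bdev_get_bdevs fragment; real output has many more fields.
bdev_json='{"block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null}'

# Stand-in for the log's jq filter
#   '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# null fields become empty strings, exactly as jq's join() does.
cmp_base_bdev=$(python3 -c '
import json, sys
b = json.loads(sys.argv[1])
print(" ".join("" if b[k] is None else str(b[k])
               for k in ("block_size", "md_size", "md_interleave", "dif_type")))
' "$bdev_json")

# What the raid bdev side produced in this log: "512" plus three empty fields.
cmp_raid_bdev='512   '

# Literal string comparison; quoting the right side disables glob matching,
# which is what the backslash escapes in the xtrace achieve.
[[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]] && echo "metadata matches"
```

The trailing spaces matter: command substitution strips trailing newlines but not trailing spaces, so a base bdev with a non-empty `md_size` would fail the comparison.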
00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.525 [2024-11-25 15:36:33.086483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:34.525 [2024-11-25 15:36:33.086569] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.525 [2024-11-25 15:36:33.086682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.525 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.785 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.785 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.785 "name": "Existed_Raid", 00:09:34.785 "uuid": "a369e86f-86ec-4b74-923d-ba54b230c2a7", 00:09:34.785 "strip_size_kb": 64, 00:09:34.785 "state": "offline", 00:09:34.785 "raid_level": "concat", 00:09:34.785 "superblock": false, 00:09:34.785 "num_base_bdevs": 3, 00:09:34.785 "num_base_bdevs_discovered": 2, 00:09:34.785 "num_base_bdevs_operational": 2, 00:09:34.785 "base_bdevs_list": [ 00:09:34.785 { 00:09:34.785 "name": null, 00:09:34.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.785 "is_configured": false, 00:09:34.785 "data_offset": 0, 00:09:34.785 "data_size": 65536 00:09:34.785 }, 00:09:34.785 { 00:09:34.785 "name": "BaseBdev2", 00:09:34.785 "uuid": 
"830cb3ac-be26-4388-82cd-b08743f4c7bf", 00:09:34.785 "is_configured": true, 00:09:34.785 "data_offset": 0, 00:09:34.785 "data_size": 65536 00:09:34.785 }, 00:09:34.785 { 00:09:34.785 "name": "BaseBdev3", 00:09:34.785 "uuid": "82becbbb-9490-439c-ad33-bbb2463ff236", 00:09:34.785 "is_configured": true, 00:09:34.785 "data_offset": 0, 00:09:34.785 "data_size": 65536 00:09:34.785 } 00:09:34.785 ] 00:09:34.785 }' 00:09:34.785 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.785 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.046 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:35.046 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.046 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.046 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.046 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.046 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.046 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.046 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.046 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.046 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:35.046 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.046 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.046 [2024-11-25 15:36:33.683217] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.306 [2024-11-25 15:36:33.834781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:35.306 [2024-11-25 15:36:33.834887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.306 15:36:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.306 15:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.566 BaseBdev2 00:09:35.566 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.566 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:35.566 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:35.566 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.566 
15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:35.566 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.566 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.566 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.566 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.566 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.566 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.567 [ 00:09:35.567 { 00:09:35.567 "name": "BaseBdev2", 00:09:35.567 "aliases": [ 00:09:35.567 "4588f574-ae70-4876-a8b9-fe5f53514844" 00:09:35.567 ], 00:09:35.567 "product_name": "Malloc disk", 00:09:35.567 "block_size": 512, 00:09:35.567 "num_blocks": 65536, 00:09:35.567 "uuid": "4588f574-ae70-4876-a8b9-fe5f53514844", 00:09:35.567 "assigned_rate_limits": { 00:09:35.567 "rw_ios_per_sec": 0, 00:09:35.567 "rw_mbytes_per_sec": 0, 00:09:35.567 "r_mbytes_per_sec": 0, 00:09:35.567 "w_mbytes_per_sec": 0 00:09:35.567 }, 00:09:35.567 "claimed": false, 00:09:35.567 "zoned": false, 00:09:35.567 "supported_io_types": { 00:09:35.567 "read": true, 00:09:35.567 "write": true, 00:09:35.567 "unmap": true, 00:09:35.567 "flush": true, 00:09:35.567 "reset": true, 00:09:35.567 "nvme_admin": false, 00:09:35.567 "nvme_io": false, 00:09:35.567 "nvme_io_md": false, 00:09:35.567 "write_zeroes": true, 
00:09:35.567 "zcopy": true, 00:09:35.567 "get_zone_info": false, 00:09:35.567 "zone_management": false, 00:09:35.567 "zone_append": false, 00:09:35.567 "compare": false, 00:09:35.567 "compare_and_write": false, 00:09:35.567 "abort": true, 00:09:35.567 "seek_hole": false, 00:09:35.567 "seek_data": false, 00:09:35.567 "copy": true, 00:09:35.567 "nvme_iov_md": false 00:09:35.567 }, 00:09:35.567 "memory_domains": [ 00:09:35.567 { 00:09:35.567 "dma_device_id": "system", 00:09:35.567 "dma_device_type": 1 00:09:35.567 }, 00:09:35.567 { 00:09:35.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.567 "dma_device_type": 2 00:09:35.567 } 00:09:35.567 ], 00:09:35.567 "driver_specific": {} 00:09:35.567 } 00:09:35.567 ] 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.567 BaseBdev3 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.567 15:36:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.567 [ 00:09:35.567 { 00:09:35.567 "name": "BaseBdev3", 00:09:35.567 "aliases": [ 00:09:35.567 "34112ac3-0282-454d-86d6-4ab5cae8e794" 00:09:35.567 ], 00:09:35.567 "product_name": "Malloc disk", 00:09:35.567 "block_size": 512, 00:09:35.567 "num_blocks": 65536, 00:09:35.567 "uuid": "34112ac3-0282-454d-86d6-4ab5cae8e794", 00:09:35.567 "assigned_rate_limits": { 00:09:35.567 "rw_ios_per_sec": 0, 00:09:35.567 "rw_mbytes_per_sec": 0, 00:09:35.567 "r_mbytes_per_sec": 0, 00:09:35.567 "w_mbytes_per_sec": 0 00:09:35.567 }, 00:09:35.567 "claimed": false, 00:09:35.567 "zoned": false, 00:09:35.567 "supported_io_types": { 00:09:35.567 "read": true, 00:09:35.567 "write": true, 00:09:35.567 "unmap": true, 00:09:35.567 "flush": true, 00:09:35.567 "reset": true, 00:09:35.567 "nvme_admin": false, 00:09:35.567 "nvme_io": false, 00:09:35.567 "nvme_io_md": false, 00:09:35.567 "write_zeroes": true, 
00:09:35.567 "zcopy": true, 00:09:35.567 "get_zone_info": false, 00:09:35.567 "zone_management": false, 00:09:35.567 "zone_append": false, 00:09:35.567 "compare": false, 00:09:35.567 "compare_and_write": false, 00:09:35.567 "abort": true, 00:09:35.567 "seek_hole": false, 00:09:35.567 "seek_data": false, 00:09:35.567 "copy": true, 00:09:35.567 "nvme_iov_md": false 00:09:35.567 }, 00:09:35.567 "memory_domains": [ 00:09:35.567 { 00:09:35.567 "dma_device_id": "system", 00:09:35.567 "dma_device_type": 1 00:09:35.567 }, 00:09:35.567 { 00:09:35.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.567 "dma_device_type": 2 00:09:35.567 } 00:09:35.567 ], 00:09:35.567 "driver_specific": {} 00:09:35.567 } 00:09:35.567 ] 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.567 [2024-11-25 15:36:34.144816] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.567 [2024-11-25 15:36:34.144903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.567 [2024-11-25 15:36:34.144944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.567 [2024-11-25 15:36:34.146676] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.567 "name": "Existed_Raid", 00:09:35.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.567 "strip_size_kb": 64, 00:09:35.567 "state": "configuring", 00:09:35.567 "raid_level": "concat", 00:09:35.567 "superblock": false, 00:09:35.567 "num_base_bdevs": 3, 00:09:35.567 "num_base_bdevs_discovered": 2, 00:09:35.567 "num_base_bdevs_operational": 3, 00:09:35.567 "base_bdevs_list": [ 00:09:35.567 { 00:09:35.567 "name": "BaseBdev1", 00:09:35.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.567 "is_configured": false, 00:09:35.567 "data_offset": 0, 00:09:35.567 "data_size": 0 00:09:35.567 }, 00:09:35.567 { 00:09:35.567 "name": "BaseBdev2", 00:09:35.567 "uuid": "4588f574-ae70-4876-a8b9-fe5f53514844", 00:09:35.567 "is_configured": true, 00:09:35.567 "data_offset": 0, 00:09:35.567 "data_size": 65536 00:09:35.567 }, 00:09:35.567 { 00:09:35.567 "name": "BaseBdev3", 00:09:35.567 "uuid": "34112ac3-0282-454d-86d6-4ab5cae8e794", 00:09:35.567 "is_configured": true, 00:09:35.567 "data_offset": 0, 00:09:35.567 "data_size": 65536 00:09:35.567 } 00:09:35.567 ] 00:09:35.567 }' 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.567 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.138 [2024-11-25 15:36:34.600098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.138 "name": "Existed_Raid", 00:09:36.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.138 "strip_size_kb": 64, 00:09:36.138 "state": "configuring", 00:09:36.138 "raid_level": "concat", 00:09:36.138 "superblock": false, 
00:09:36.138 "num_base_bdevs": 3, 00:09:36.138 "num_base_bdevs_discovered": 1, 00:09:36.138 "num_base_bdevs_operational": 3, 00:09:36.138 "base_bdevs_list": [ 00:09:36.138 { 00:09:36.138 "name": "BaseBdev1", 00:09:36.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.138 "is_configured": false, 00:09:36.138 "data_offset": 0, 00:09:36.138 "data_size": 0 00:09:36.138 }, 00:09:36.138 { 00:09:36.138 "name": null, 00:09:36.138 "uuid": "4588f574-ae70-4876-a8b9-fe5f53514844", 00:09:36.138 "is_configured": false, 00:09:36.138 "data_offset": 0, 00:09:36.138 "data_size": 65536 00:09:36.138 }, 00:09:36.138 { 00:09:36.138 "name": "BaseBdev3", 00:09:36.138 "uuid": "34112ac3-0282-454d-86d6-4ab5cae8e794", 00:09:36.138 "is_configured": true, 00:09:36.138 "data_offset": 0, 00:09:36.138 "data_size": 65536 00:09:36.138 } 00:09:36.138 ] 00:09:36.138 }' 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.138 15:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.708 
15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.708 [2024-11-25 15:36:35.159902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.708 BaseBdev1 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.708 [ 00:09:36.708 { 00:09:36.708 "name": "BaseBdev1", 00:09:36.708 "aliases": [ 00:09:36.708 "50097000-08fc-4911-8597-831522102d84" 00:09:36.708 ], 00:09:36.708 "product_name": 
"Malloc disk", 00:09:36.708 "block_size": 512, 00:09:36.708 "num_blocks": 65536, 00:09:36.708 "uuid": "50097000-08fc-4911-8597-831522102d84", 00:09:36.708 "assigned_rate_limits": { 00:09:36.708 "rw_ios_per_sec": 0, 00:09:36.708 "rw_mbytes_per_sec": 0, 00:09:36.708 "r_mbytes_per_sec": 0, 00:09:36.708 "w_mbytes_per_sec": 0 00:09:36.708 }, 00:09:36.708 "claimed": true, 00:09:36.708 "claim_type": "exclusive_write", 00:09:36.708 "zoned": false, 00:09:36.708 "supported_io_types": { 00:09:36.708 "read": true, 00:09:36.708 "write": true, 00:09:36.708 "unmap": true, 00:09:36.708 "flush": true, 00:09:36.708 "reset": true, 00:09:36.708 "nvme_admin": false, 00:09:36.708 "nvme_io": false, 00:09:36.708 "nvme_io_md": false, 00:09:36.708 "write_zeroes": true, 00:09:36.708 "zcopy": true, 00:09:36.708 "get_zone_info": false, 00:09:36.708 "zone_management": false, 00:09:36.708 "zone_append": false, 00:09:36.708 "compare": false, 00:09:36.708 "compare_and_write": false, 00:09:36.708 "abort": true, 00:09:36.708 "seek_hole": false, 00:09:36.708 "seek_data": false, 00:09:36.708 "copy": true, 00:09:36.708 "nvme_iov_md": false 00:09:36.708 }, 00:09:36.708 "memory_domains": [ 00:09:36.708 { 00:09:36.708 "dma_device_id": "system", 00:09:36.708 "dma_device_type": 1 00:09:36.708 }, 00:09:36.708 { 00:09:36.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.708 "dma_device_type": 2 00:09:36.708 } 00:09:36.708 ], 00:09:36.708 "driver_specific": {} 00:09:36.708 } 00:09:36.708 ] 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.708 15:36:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.708 "name": "Existed_Raid", 00:09:36.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.708 "strip_size_kb": 64, 00:09:36.708 "state": "configuring", 00:09:36.708 "raid_level": "concat", 00:09:36.708 "superblock": false, 00:09:36.708 "num_base_bdevs": 3, 00:09:36.708 "num_base_bdevs_discovered": 2, 00:09:36.708 "num_base_bdevs_operational": 3, 00:09:36.708 "base_bdevs_list": [ 00:09:36.708 { 00:09:36.708 "name": "BaseBdev1", 
00:09:36.708 "uuid": "50097000-08fc-4911-8597-831522102d84", 00:09:36.708 "is_configured": true, 00:09:36.708 "data_offset": 0, 00:09:36.708 "data_size": 65536 00:09:36.708 }, 00:09:36.708 { 00:09:36.708 "name": null, 00:09:36.708 "uuid": "4588f574-ae70-4876-a8b9-fe5f53514844", 00:09:36.708 "is_configured": false, 00:09:36.708 "data_offset": 0, 00:09:36.708 "data_size": 65536 00:09:36.708 }, 00:09:36.708 { 00:09:36.708 "name": "BaseBdev3", 00:09:36.708 "uuid": "34112ac3-0282-454d-86d6-4ab5cae8e794", 00:09:36.708 "is_configured": true, 00:09:36.708 "data_offset": 0, 00:09:36.708 "data_size": 65536 00:09:36.708 } 00:09:36.708 ] 00:09:36.708 }' 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.708 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.278 [2024-11-25 15:36:35.731066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.278 
15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.278 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.278 "name": "Existed_Raid", 00:09:37.278 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:37.278 "strip_size_kb": 64, 00:09:37.278 "state": "configuring", 00:09:37.278 "raid_level": "concat", 00:09:37.278 "superblock": false, 00:09:37.278 "num_base_bdevs": 3, 00:09:37.278 "num_base_bdevs_discovered": 1, 00:09:37.278 "num_base_bdevs_operational": 3, 00:09:37.278 "base_bdevs_list": [ 00:09:37.278 { 00:09:37.278 "name": "BaseBdev1", 00:09:37.279 "uuid": "50097000-08fc-4911-8597-831522102d84", 00:09:37.279 "is_configured": true, 00:09:37.279 "data_offset": 0, 00:09:37.279 "data_size": 65536 00:09:37.279 }, 00:09:37.279 { 00:09:37.279 "name": null, 00:09:37.279 "uuid": "4588f574-ae70-4876-a8b9-fe5f53514844", 00:09:37.279 "is_configured": false, 00:09:37.279 "data_offset": 0, 00:09:37.279 "data_size": 65536 00:09:37.279 }, 00:09:37.279 { 00:09:37.279 "name": null, 00:09:37.279 "uuid": "34112ac3-0282-454d-86d6-4ab5cae8e794", 00:09:37.279 "is_configured": false, 00:09:37.279 "data_offset": 0, 00:09:37.279 "data_size": 65536 00:09:37.279 } 00:09:37.279 ] 00:09:37.279 }' 00:09:37.279 15:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.279 15:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.538 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.538 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:37.538 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.538 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.538 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.798 [2024-11-25 15:36:36.226221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.798 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.799 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.799 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.799 "name": "Existed_Raid", 00:09:37.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.799 "strip_size_kb": 64, 00:09:37.799 "state": "configuring", 00:09:37.799 "raid_level": "concat", 00:09:37.799 "superblock": false, 00:09:37.799 "num_base_bdevs": 3, 00:09:37.799 "num_base_bdevs_discovered": 2, 00:09:37.799 "num_base_bdevs_operational": 3, 00:09:37.799 "base_bdevs_list": [ 00:09:37.799 { 00:09:37.799 "name": "BaseBdev1", 00:09:37.799 "uuid": "50097000-08fc-4911-8597-831522102d84", 00:09:37.799 "is_configured": true, 00:09:37.799 "data_offset": 0, 00:09:37.799 "data_size": 65536 00:09:37.799 }, 00:09:37.799 { 00:09:37.799 "name": null, 00:09:37.799 "uuid": "4588f574-ae70-4876-a8b9-fe5f53514844", 00:09:37.799 "is_configured": false, 00:09:37.799 "data_offset": 0, 00:09:37.799 "data_size": 65536 00:09:37.799 }, 00:09:37.799 { 00:09:37.799 "name": "BaseBdev3", 00:09:37.799 "uuid": "34112ac3-0282-454d-86d6-4ab5cae8e794", 00:09:37.799 "is_configured": true, 00:09:37.799 "data_offset": 0, 00:09:37.799 "data_size": 65536 00:09:37.799 } 00:09:37.799 ] 00:09:37.799 }' 00:09:37.799 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.799 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.059 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.059 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.059 15:36:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.059 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.059 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.059 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:38.059 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:38.059 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.059 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.059 [2024-11-25 15:36:36.681420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.319 
15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.319 "name": "Existed_Raid", 00:09:38.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.319 "strip_size_kb": 64, 00:09:38.319 "state": "configuring", 00:09:38.319 "raid_level": "concat", 00:09:38.319 "superblock": false, 00:09:38.319 "num_base_bdevs": 3, 00:09:38.319 "num_base_bdevs_discovered": 1, 00:09:38.319 "num_base_bdevs_operational": 3, 00:09:38.319 "base_bdevs_list": [ 00:09:38.319 { 00:09:38.319 "name": null, 00:09:38.319 "uuid": "50097000-08fc-4911-8597-831522102d84", 00:09:38.319 "is_configured": false, 00:09:38.319 "data_offset": 0, 00:09:38.319 "data_size": 65536 00:09:38.319 }, 00:09:38.319 { 00:09:38.319 "name": null, 00:09:38.319 "uuid": "4588f574-ae70-4876-a8b9-fe5f53514844", 00:09:38.319 "is_configured": false, 00:09:38.319 "data_offset": 0, 00:09:38.319 "data_size": 65536 00:09:38.319 }, 00:09:38.319 { 00:09:38.319 "name": "BaseBdev3", 00:09:38.319 "uuid": "34112ac3-0282-454d-86d6-4ab5cae8e794", 00:09:38.319 "is_configured": true, 00:09:38.319 "data_offset": 0, 00:09:38.319 "data_size": 65536 00:09:38.319 } 00:09:38.319 ] 00:09:38.319 }' 00:09:38.319 15:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.319 15:36:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.579 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.579 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:38.579 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.579 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.579 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.579 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:38.579 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:38.579 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.579 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.579 [2024-11-25 15:36:37.255094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.839 15:36:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.839 "name": "Existed_Raid", 00:09:38.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.839 "strip_size_kb": 64, 00:09:38.839 "state": "configuring", 00:09:38.839 "raid_level": "concat", 00:09:38.839 "superblock": false, 00:09:38.839 "num_base_bdevs": 3, 00:09:38.839 "num_base_bdevs_discovered": 2, 00:09:38.839 "num_base_bdevs_operational": 3, 00:09:38.839 "base_bdevs_list": [ 00:09:38.839 { 00:09:38.839 "name": null, 00:09:38.839 "uuid": "50097000-08fc-4911-8597-831522102d84", 00:09:38.839 "is_configured": false, 00:09:38.839 "data_offset": 0, 00:09:38.839 "data_size": 65536 00:09:38.839 }, 00:09:38.839 { 00:09:38.839 "name": "BaseBdev2", 00:09:38.839 "uuid": "4588f574-ae70-4876-a8b9-fe5f53514844", 00:09:38.839 "is_configured": true, 00:09:38.839 "data_offset": 
0, 00:09:38.839 "data_size": 65536 00:09:38.839 }, 00:09:38.839 { 00:09:38.839 "name": "BaseBdev3", 00:09:38.839 "uuid": "34112ac3-0282-454d-86d6-4ab5cae8e794", 00:09:38.839 "is_configured": true, 00:09:38.839 "data_offset": 0, 00:09:38.839 "data_size": 65536 00:09:38.839 } 00:09:38.839 ] 00:09:38.839 }' 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.839 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.099 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:39.099 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.099 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.099 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.099 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.099 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:39.099 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.099 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.099 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.099 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:39.099 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.099 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 50097000-08fc-4911-8597-831522102d84 00:09:39.099 15:36:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.099 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.360 [2024-11-25 15:36:37.810373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:39.360 [2024-11-25 15:36:37.810494] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:39.360 [2024-11-25 15:36:37.810524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:39.360 [2024-11-25 15:36:37.810803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:39.360 [2024-11-25 15:36:37.811004] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:39.360 [2024-11-25 15:36:37.811067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:39.360 [2024-11-25 15:36:37.811335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.360 NewBaseBdev 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.360 
15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.360 [ 00:09:39.360 { 00:09:39.360 "name": "NewBaseBdev", 00:09:39.360 "aliases": [ 00:09:39.360 "50097000-08fc-4911-8597-831522102d84" 00:09:39.360 ], 00:09:39.360 "product_name": "Malloc disk", 00:09:39.360 "block_size": 512, 00:09:39.360 "num_blocks": 65536, 00:09:39.360 "uuid": "50097000-08fc-4911-8597-831522102d84", 00:09:39.360 "assigned_rate_limits": { 00:09:39.360 "rw_ios_per_sec": 0, 00:09:39.360 "rw_mbytes_per_sec": 0, 00:09:39.360 "r_mbytes_per_sec": 0, 00:09:39.360 "w_mbytes_per_sec": 0 00:09:39.360 }, 00:09:39.360 "claimed": true, 00:09:39.360 "claim_type": "exclusive_write", 00:09:39.360 "zoned": false, 00:09:39.360 "supported_io_types": { 00:09:39.360 "read": true, 00:09:39.360 "write": true, 00:09:39.360 "unmap": true, 00:09:39.360 "flush": true, 00:09:39.360 "reset": true, 00:09:39.360 "nvme_admin": false, 00:09:39.360 "nvme_io": false, 00:09:39.360 "nvme_io_md": false, 00:09:39.360 "write_zeroes": true, 00:09:39.360 "zcopy": true, 00:09:39.360 "get_zone_info": false, 00:09:39.360 "zone_management": false, 00:09:39.360 "zone_append": false, 00:09:39.360 "compare": false, 00:09:39.360 "compare_and_write": false, 00:09:39.360 "abort": true, 00:09:39.360 "seek_hole": false, 00:09:39.360 "seek_data": false, 00:09:39.360 "copy": true, 00:09:39.360 "nvme_iov_md": false 00:09:39.360 }, 00:09:39.360 
"memory_domains": [ 00:09:39.360 { 00:09:39.360 "dma_device_id": "system", 00:09:39.360 "dma_device_type": 1 00:09:39.360 }, 00:09:39.360 { 00:09:39.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.360 "dma_device_type": 2 00:09:39.360 } 00:09:39.360 ], 00:09:39.360 "driver_specific": {} 00:09:39.360 } 00:09:39.360 ] 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.360 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.361 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.361 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.361 15:36:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.361 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.361 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.361 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.361 "name": "Existed_Raid", 00:09:39.361 "uuid": "b9792ebc-e305-4c14-b1f0-9c3887151377", 00:09:39.361 "strip_size_kb": 64, 00:09:39.361 "state": "online", 00:09:39.361 "raid_level": "concat", 00:09:39.361 "superblock": false, 00:09:39.361 "num_base_bdevs": 3, 00:09:39.361 "num_base_bdevs_discovered": 3, 00:09:39.361 "num_base_bdevs_operational": 3, 00:09:39.361 "base_bdevs_list": [ 00:09:39.361 { 00:09:39.361 "name": "NewBaseBdev", 00:09:39.361 "uuid": "50097000-08fc-4911-8597-831522102d84", 00:09:39.361 "is_configured": true, 00:09:39.361 "data_offset": 0, 00:09:39.361 "data_size": 65536 00:09:39.361 }, 00:09:39.361 { 00:09:39.361 "name": "BaseBdev2", 00:09:39.361 "uuid": "4588f574-ae70-4876-a8b9-fe5f53514844", 00:09:39.361 "is_configured": true, 00:09:39.361 "data_offset": 0, 00:09:39.361 "data_size": 65536 00:09:39.361 }, 00:09:39.361 { 00:09:39.361 "name": "BaseBdev3", 00:09:39.361 "uuid": "34112ac3-0282-454d-86d6-4ab5cae8e794", 00:09:39.361 "is_configured": true, 00:09:39.361 "data_offset": 0, 00:09:39.361 "data_size": 65536 00:09:39.361 } 00:09:39.361 ] 00:09:39.361 }' 00:09:39.361 15:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.361 15:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.621 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:39.621 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:39.621 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:39.621 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.621 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.621 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.621 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.621 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:39.621 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.621 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.621 [2024-11-25 15:36:38.277946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.621 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.881 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.881 "name": "Existed_Raid", 00:09:39.881 "aliases": [ 00:09:39.881 "b9792ebc-e305-4c14-b1f0-9c3887151377" 00:09:39.881 ], 00:09:39.881 "product_name": "Raid Volume", 00:09:39.881 "block_size": 512, 00:09:39.881 "num_blocks": 196608, 00:09:39.881 "uuid": "b9792ebc-e305-4c14-b1f0-9c3887151377", 00:09:39.881 "assigned_rate_limits": { 00:09:39.881 "rw_ios_per_sec": 0, 00:09:39.881 "rw_mbytes_per_sec": 0, 00:09:39.881 "r_mbytes_per_sec": 0, 00:09:39.881 "w_mbytes_per_sec": 0 00:09:39.881 }, 00:09:39.881 "claimed": false, 00:09:39.881 "zoned": false, 00:09:39.881 "supported_io_types": { 00:09:39.881 "read": true, 00:09:39.881 "write": true, 00:09:39.881 "unmap": true, 00:09:39.881 "flush": true, 00:09:39.881 "reset": true, 00:09:39.881 "nvme_admin": false, 00:09:39.881 "nvme_io": false, 00:09:39.881 "nvme_io_md": false, 00:09:39.881 "write_zeroes": true, 
00:09:39.881 "zcopy": false, 00:09:39.881 "get_zone_info": false, 00:09:39.881 "zone_management": false, 00:09:39.881 "zone_append": false, 00:09:39.881 "compare": false, 00:09:39.881 "compare_and_write": false, 00:09:39.881 "abort": false, 00:09:39.881 "seek_hole": false, 00:09:39.881 "seek_data": false, 00:09:39.881 "copy": false, 00:09:39.881 "nvme_iov_md": false 00:09:39.881 }, 00:09:39.881 "memory_domains": [ 00:09:39.881 { 00:09:39.881 "dma_device_id": "system", 00:09:39.881 "dma_device_type": 1 00:09:39.881 }, 00:09:39.881 { 00:09:39.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.881 "dma_device_type": 2 00:09:39.881 }, 00:09:39.881 { 00:09:39.881 "dma_device_id": "system", 00:09:39.881 "dma_device_type": 1 00:09:39.881 }, 00:09:39.881 { 00:09:39.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.881 "dma_device_type": 2 00:09:39.881 }, 00:09:39.881 { 00:09:39.881 "dma_device_id": "system", 00:09:39.881 "dma_device_type": 1 00:09:39.881 }, 00:09:39.881 { 00:09:39.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.881 "dma_device_type": 2 00:09:39.881 } 00:09:39.881 ], 00:09:39.881 "driver_specific": { 00:09:39.881 "raid": { 00:09:39.881 "uuid": "b9792ebc-e305-4c14-b1f0-9c3887151377", 00:09:39.881 "strip_size_kb": 64, 00:09:39.881 "state": "online", 00:09:39.881 "raid_level": "concat", 00:09:39.881 "superblock": false, 00:09:39.881 "num_base_bdevs": 3, 00:09:39.881 "num_base_bdevs_discovered": 3, 00:09:39.881 "num_base_bdevs_operational": 3, 00:09:39.881 "base_bdevs_list": [ 00:09:39.881 { 00:09:39.881 "name": "NewBaseBdev", 00:09:39.881 "uuid": "50097000-08fc-4911-8597-831522102d84", 00:09:39.881 "is_configured": true, 00:09:39.881 "data_offset": 0, 00:09:39.881 "data_size": 65536 00:09:39.881 }, 00:09:39.881 { 00:09:39.881 "name": "BaseBdev2", 00:09:39.881 "uuid": "4588f574-ae70-4876-a8b9-fe5f53514844", 00:09:39.881 "is_configured": true, 00:09:39.881 "data_offset": 0, 00:09:39.881 "data_size": 65536 00:09:39.881 }, 00:09:39.881 { 
00:09:39.881 "name": "BaseBdev3", 00:09:39.881 "uuid": "34112ac3-0282-454d-86d6-4ab5cae8e794", 00:09:39.881 "is_configured": true, 00:09:39.881 "data_offset": 0, 00:09:39.881 "data_size": 65536 00:09:39.882 } 00:09:39.882 ] 00:09:39.882 } 00:09:39.882 } 00:09:39.882 }' 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:39.882 BaseBdev2 00:09:39.882 BaseBdev3' 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.882 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:40.142 [2024-11-25 15:36:38.581106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.142 [2024-11-25 15:36:38.581171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.142 [2024-11-25 15:36:38.581265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.142 [2024-11-25 15:36:38.581337] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.142 [2024-11-25 15:36:38.581389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65393 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65393 ']' 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65393 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65393 00:09:40.142 killing process with pid 65393 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65393' 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65393 00:09:40.142 [2024-11-25 15:36:38.630023] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.142 15:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65393 00:09:40.403 [2024-11-25 15:36:38.920734] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.377 15:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:41.377 00:09:41.377 real 0m10.575s 00:09:41.377 user 0m16.838s 00:09:41.377 sys 0m1.848s 00:09:41.377 ************************************ 00:09:41.377 END TEST raid_state_function_test 00:09:41.377 ************************************ 00:09:41.377 15:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.377 15:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.637 15:36:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:41.637 15:36:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:41.637 15:36:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.637 15:36:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.637 ************************************ 00:09:41.637 START TEST raid_state_function_test_sb 00:09:41.637 ************************************ 00:09:41.637 15:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:41.637 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:41.637 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:41.637 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:41.637 15:36:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:41.637 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:41.637 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.637 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:41.637 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:41.638 Process raid pid: 66014 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66014 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66014' 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66014 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66014 ']' 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.638 15:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.638 [2024-11-25 15:36:40.176651] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:09:41.638 [2024-11-25 15:36:40.176850] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.898 [2024-11-25 15:36:40.349976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.898 [2024-11-25 15:36:40.465028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.158 [2024-11-25 15:36:40.666535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.158 [2024-11-25 15:36:40.666579] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.419 15:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.419 15:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:42.419 15:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:42.419 15:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.419 15:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.419 [2024-11-25 15:36:41.003371] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:42.419 [2024-11-25 15:36:41.003476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:42.419 [2024-11-25 
15:36:41.003511] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.419 [2024-11-25 15:36:41.003559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.419 [2024-11-25 15:36:41.003609] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:42.419 [2024-11-25 15:36:41.003635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.419 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.420 "name": "Existed_Raid", 00:09:42.420 "uuid": "24d58845-626b-49f1-ace2-16bc400b8703", 00:09:42.420 "strip_size_kb": 64, 00:09:42.420 "state": "configuring", 00:09:42.420 "raid_level": "concat", 00:09:42.420 "superblock": true, 00:09:42.420 "num_base_bdevs": 3, 00:09:42.420 "num_base_bdevs_discovered": 0, 00:09:42.420 "num_base_bdevs_operational": 3, 00:09:42.420 "base_bdevs_list": [ 00:09:42.420 { 00:09:42.420 "name": "BaseBdev1", 00:09:42.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.420 "is_configured": false, 00:09:42.420 "data_offset": 0, 00:09:42.420 "data_size": 0 00:09:42.420 }, 00:09:42.420 { 00:09:42.420 "name": "BaseBdev2", 00:09:42.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.420 "is_configured": false, 00:09:42.420 "data_offset": 0, 00:09:42.420 "data_size": 0 00:09:42.420 }, 00:09:42.420 { 00:09:42.420 "name": "BaseBdev3", 00:09:42.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.420 "is_configured": false, 00:09:42.420 "data_offset": 0, 00:09:42.420 "data_size": 0 00:09:42.420 } 00:09:42.420 ] 00:09:42.420 }' 00:09:42.420 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.420 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.991 [2024-11-25 15:36:41.442575] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.991 [2024-11-25 15:36:41.442654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.991 [2024-11-25 15:36:41.454573] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:42.991 [2024-11-25 15:36:41.454620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:42.991 [2024-11-25 15:36:41.454629] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.991 [2024-11-25 15:36:41.454638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.991 [2024-11-25 15:36:41.454644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:42.991 [2024-11-25 15:36:41.454653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:42.991 
15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.991 [2024-11-25 15:36:41.502462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.991 BaseBdev1 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.991 [ 00:09:42.991 { 
00:09:42.991 "name": "BaseBdev1", 00:09:42.991 "aliases": [ 00:09:42.991 "957b13d0-db54-4db6-bd3d-41fda842ef3a" 00:09:42.991 ], 00:09:42.991 "product_name": "Malloc disk", 00:09:42.991 "block_size": 512, 00:09:42.991 "num_blocks": 65536, 00:09:42.991 "uuid": "957b13d0-db54-4db6-bd3d-41fda842ef3a", 00:09:42.991 "assigned_rate_limits": { 00:09:42.991 "rw_ios_per_sec": 0, 00:09:42.991 "rw_mbytes_per_sec": 0, 00:09:42.991 "r_mbytes_per_sec": 0, 00:09:42.991 "w_mbytes_per_sec": 0 00:09:42.991 }, 00:09:42.991 "claimed": true, 00:09:42.991 "claim_type": "exclusive_write", 00:09:42.991 "zoned": false, 00:09:42.991 "supported_io_types": { 00:09:42.991 "read": true, 00:09:42.991 "write": true, 00:09:42.991 "unmap": true, 00:09:42.991 "flush": true, 00:09:42.991 "reset": true, 00:09:42.991 "nvme_admin": false, 00:09:42.991 "nvme_io": false, 00:09:42.991 "nvme_io_md": false, 00:09:42.991 "write_zeroes": true, 00:09:42.991 "zcopy": true, 00:09:42.991 "get_zone_info": false, 00:09:42.991 "zone_management": false, 00:09:42.991 "zone_append": false, 00:09:42.991 "compare": false, 00:09:42.991 "compare_and_write": false, 00:09:42.991 "abort": true, 00:09:42.991 "seek_hole": false, 00:09:42.991 "seek_data": false, 00:09:42.991 "copy": true, 00:09:42.991 "nvme_iov_md": false 00:09:42.991 }, 00:09:42.991 "memory_domains": [ 00:09:42.991 { 00:09:42.991 "dma_device_id": "system", 00:09:42.991 "dma_device_type": 1 00:09:42.991 }, 00:09:42.991 { 00:09:42.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.991 "dma_device_type": 2 00:09:42.991 } 00:09:42.991 ], 00:09:42.991 "driver_specific": {} 00:09:42.991 } 00:09:42.991 ] 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.991 "name": "Existed_Raid", 00:09:42.991 "uuid": "67b37b1a-6a18-4b50-9004-1ee0cb48546e", 00:09:42.991 "strip_size_kb": 64, 00:09:42.991 "state": "configuring", 00:09:42.991 "raid_level": "concat", 00:09:42.991 "superblock": true, 00:09:42.991 
"num_base_bdevs": 3, 00:09:42.991 "num_base_bdevs_discovered": 1, 00:09:42.991 "num_base_bdevs_operational": 3, 00:09:42.991 "base_bdevs_list": [ 00:09:42.991 { 00:09:42.991 "name": "BaseBdev1", 00:09:42.991 "uuid": "957b13d0-db54-4db6-bd3d-41fda842ef3a", 00:09:42.991 "is_configured": true, 00:09:42.991 "data_offset": 2048, 00:09:42.991 "data_size": 63488 00:09:42.991 }, 00:09:42.991 { 00:09:42.991 "name": "BaseBdev2", 00:09:42.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.991 "is_configured": false, 00:09:42.991 "data_offset": 0, 00:09:42.991 "data_size": 0 00:09:42.991 }, 00:09:42.991 { 00:09:42.991 "name": "BaseBdev3", 00:09:42.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.991 "is_configured": false, 00:09:42.991 "data_offset": 0, 00:09:42.991 "data_size": 0 00:09:42.991 } 00:09:42.991 ] 00:09:42.991 }' 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.991 15:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.561 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:43.561 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.561 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.561 [2024-11-25 15:36:42.017619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:43.561 [2024-11-25 15:36:42.017723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:43.561 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.561 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:43.561 
15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.561 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.562 [2024-11-25 15:36:42.029653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.562 [2024-11-25 15:36:42.031547] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.562 [2024-11-25 15:36:42.031627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.562 [2024-11-25 15:36:42.031657] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:43.562 [2024-11-25 15:36:42.031680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.562 "name": "Existed_Raid", 00:09:43.562 "uuid": "787bb613-074f-4910-b458-6849a5374c7c", 00:09:43.562 "strip_size_kb": 64, 00:09:43.562 "state": "configuring", 00:09:43.562 "raid_level": "concat", 00:09:43.562 "superblock": true, 00:09:43.562 "num_base_bdevs": 3, 00:09:43.562 "num_base_bdevs_discovered": 1, 00:09:43.562 "num_base_bdevs_operational": 3, 00:09:43.562 "base_bdevs_list": [ 00:09:43.562 { 00:09:43.562 "name": "BaseBdev1", 00:09:43.562 "uuid": "957b13d0-db54-4db6-bd3d-41fda842ef3a", 00:09:43.562 "is_configured": true, 00:09:43.562 "data_offset": 2048, 00:09:43.562 "data_size": 63488 00:09:43.562 }, 00:09:43.562 { 00:09:43.562 "name": "BaseBdev2", 00:09:43.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.562 "is_configured": false, 00:09:43.562 "data_offset": 0, 00:09:43.562 "data_size": 0 00:09:43.562 }, 00:09:43.562 { 00:09:43.562 "name": "BaseBdev3", 00:09:43.562 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:43.562 "is_configured": false, 00:09:43.562 "data_offset": 0, 00:09:43.562 "data_size": 0 00:09:43.562 } 00:09:43.562 ] 00:09:43.562 }' 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.562 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.132 [2024-11-25 15:36:42.558407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.132 BaseBdev2 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.132 [ 00:09:44.132 { 00:09:44.132 "name": "BaseBdev2", 00:09:44.132 "aliases": [ 00:09:44.132 "d8f9a844-fa80-45e9-b6ca-a44b6e34d731" 00:09:44.132 ], 00:09:44.132 "product_name": "Malloc disk", 00:09:44.132 "block_size": 512, 00:09:44.132 "num_blocks": 65536, 00:09:44.132 "uuid": "d8f9a844-fa80-45e9-b6ca-a44b6e34d731", 00:09:44.132 "assigned_rate_limits": { 00:09:44.132 "rw_ios_per_sec": 0, 00:09:44.132 "rw_mbytes_per_sec": 0, 00:09:44.132 "r_mbytes_per_sec": 0, 00:09:44.132 "w_mbytes_per_sec": 0 00:09:44.132 }, 00:09:44.132 "claimed": true, 00:09:44.132 "claim_type": "exclusive_write", 00:09:44.132 "zoned": false, 00:09:44.132 "supported_io_types": { 00:09:44.132 "read": true, 00:09:44.132 "write": true, 00:09:44.132 "unmap": true, 00:09:44.132 "flush": true, 00:09:44.132 "reset": true, 00:09:44.132 "nvme_admin": false, 00:09:44.132 "nvme_io": false, 00:09:44.132 "nvme_io_md": false, 00:09:44.132 "write_zeroes": true, 00:09:44.132 "zcopy": true, 00:09:44.132 "get_zone_info": false, 00:09:44.132 "zone_management": false, 00:09:44.132 "zone_append": false, 00:09:44.132 "compare": false, 00:09:44.132 "compare_and_write": false, 00:09:44.132 "abort": true, 00:09:44.132 "seek_hole": false, 00:09:44.132 "seek_data": false, 00:09:44.132 "copy": true, 00:09:44.132 "nvme_iov_md": false 00:09:44.132 }, 00:09:44.132 "memory_domains": [ 00:09:44.132 { 00:09:44.132 "dma_device_id": "system", 00:09:44.132 "dma_device_type": 1 00:09:44.132 }, 00:09:44.132 { 00:09:44.132 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.132 "dma_device_type": 2 00:09:44.132 } 00:09:44.132 ], 00:09:44.132 "driver_specific": {} 00:09:44.132 } 00:09:44.132 ] 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.132 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.133 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.133 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.133 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.133 15:36:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.133 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.133 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.133 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.133 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.133 "name": "Existed_Raid", 00:09:44.133 "uuid": "787bb613-074f-4910-b458-6849a5374c7c", 00:09:44.133 "strip_size_kb": 64, 00:09:44.133 "state": "configuring", 00:09:44.133 "raid_level": "concat", 00:09:44.133 "superblock": true, 00:09:44.133 "num_base_bdevs": 3, 00:09:44.133 "num_base_bdevs_discovered": 2, 00:09:44.133 "num_base_bdevs_operational": 3, 00:09:44.133 "base_bdevs_list": [ 00:09:44.133 { 00:09:44.133 "name": "BaseBdev1", 00:09:44.133 "uuid": "957b13d0-db54-4db6-bd3d-41fda842ef3a", 00:09:44.133 "is_configured": true, 00:09:44.133 "data_offset": 2048, 00:09:44.133 "data_size": 63488 00:09:44.133 }, 00:09:44.133 { 00:09:44.133 "name": "BaseBdev2", 00:09:44.133 "uuid": "d8f9a844-fa80-45e9-b6ca-a44b6e34d731", 00:09:44.133 "is_configured": true, 00:09:44.133 "data_offset": 2048, 00:09:44.133 "data_size": 63488 00:09:44.133 }, 00:09:44.133 { 00:09:44.133 "name": "BaseBdev3", 00:09:44.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.133 "is_configured": false, 00:09:44.133 "data_offset": 0, 00:09:44.133 "data_size": 0 00:09:44.133 } 00:09:44.133 ] 00:09:44.133 }' 00:09:44.133 15:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.133 15:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.393 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:44.393 15:36:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.393 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.653 [2024-11-25 15:36:43.098532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:44.653 [2024-11-25 15:36:43.098884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:44.653 [2024-11-25 15:36:43.098945] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:44.653 [2024-11-25 15:36:43.099247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:44.653 [2024-11-25 15:36:43.099450] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:44.653 [2024-11-25 15:36:43.099492] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:44.653 BaseBdev3 00:09:44.653 [2024-11-25 15:36:43.099665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.653 [ 00:09:44.653 { 00:09:44.653 "name": "BaseBdev3", 00:09:44.653 "aliases": [ 00:09:44.653 "63927a01-7ce7-4a3b-9fa2-f06d95ae6c14" 00:09:44.653 ], 00:09:44.653 "product_name": "Malloc disk", 00:09:44.653 "block_size": 512, 00:09:44.653 "num_blocks": 65536, 00:09:44.653 "uuid": "63927a01-7ce7-4a3b-9fa2-f06d95ae6c14", 00:09:44.653 "assigned_rate_limits": { 00:09:44.653 "rw_ios_per_sec": 0, 00:09:44.653 "rw_mbytes_per_sec": 0, 00:09:44.653 "r_mbytes_per_sec": 0, 00:09:44.653 "w_mbytes_per_sec": 0 00:09:44.653 }, 00:09:44.653 "claimed": true, 00:09:44.653 "claim_type": "exclusive_write", 00:09:44.653 "zoned": false, 00:09:44.653 "supported_io_types": { 00:09:44.653 "read": true, 00:09:44.653 "write": true, 00:09:44.653 "unmap": true, 00:09:44.653 "flush": true, 00:09:44.653 "reset": true, 00:09:44.653 "nvme_admin": false, 00:09:44.653 "nvme_io": false, 00:09:44.653 "nvme_io_md": false, 00:09:44.653 "write_zeroes": true, 00:09:44.653 "zcopy": true, 00:09:44.653 "get_zone_info": false, 00:09:44.653 "zone_management": false, 00:09:44.653 "zone_append": false, 00:09:44.653 "compare": false, 00:09:44.653 "compare_and_write": false, 00:09:44.653 "abort": true, 00:09:44.653 "seek_hole": false, 00:09:44.653 "seek_data": false, 
00:09:44.653 "copy": true, 00:09:44.653 "nvme_iov_md": false 00:09:44.653 }, 00:09:44.653 "memory_domains": [ 00:09:44.653 { 00:09:44.653 "dma_device_id": "system", 00:09:44.653 "dma_device_type": 1 00:09:44.653 }, 00:09:44.653 { 00:09:44.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.653 "dma_device_type": 2 00:09:44.653 } 00:09:44.653 ], 00:09:44.653 "driver_specific": {} 00:09:44.653 } 00:09:44.653 ] 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.653 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.654 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.654 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.654 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.654 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.654 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.654 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.654 15:36:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.654 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.654 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.654 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.654 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.654 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.654 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.654 "name": "Existed_Raid", 00:09:44.654 "uuid": "787bb613-074f-4910-b458-6849a5374c7c", 00:09:44.654 "strip_size_kb": 64, 00:09:44.654 "state": "online", 00:09:44.654 "raid_level": "concat", 00:09:44.654 "superblock": true, 00:09:44.654 "num_base_bdevs": 3, 00:09:44.654 "num_base_bdevs_discovered": 3, 00:09:44.654 "num_base_bdevs_operational": 3, 00:09:44.654 "base_bdevs_list": [ 00:09:44.654 { 00:09:44.654 "name": "BaseBdev1", 00:09:44.654 "uuid": "957b13d0-db54-4db6-bd3d-41fda842ef3a", 00:09:44.654 "is_configured": true, 00:09:44.654 "data_offset": 2048, 00:09:44.654 "data_size": 63488 00:09:44.654 }, 00:09:44.654 { 00:09:44.654 "name": "BaseBdev2", 00:09:44.654 "uuid": "d8f9a844-fa80-45e9-b6ca-a44b6e34d731", 00:09:44.654 "is_configured": true, 00:09:44.654 "data_offset": 2048, 00:09:44.654 "data_size": 63488 00:09:44.654 }, 00:09:44.654 { 00:09:44.654 "name": "BaseBdev3", 00:09:44.654 "uuid": "63927a01-7ce7-4a3b-9fa2-f06d95ae6c14", 00:09:44.654 "is_configured": true, 00:09:44.654 "data_offset": 2048, 00:09:44.654 "data_size": 63488 00:09:44.654 } 00:09:44.654 ] 00:09:44.654 }' 00:09:44.654 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.654 15:36:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.914 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:44.914 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:44.914 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:44.914 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.914 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.914 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.914 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:44.914 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.914 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.914 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.914 [2024-11-25 15:36:43.550082] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.914 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.914 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.914 "name": "Existed_Raid", 00:09:44.914 "aliases": [ 00:09:44.914 "787bb613-074f-4910-b458-6849a5374c7c" 00:09:44.914 ], 00:09:44.914 "product_name": "Raid Volume", 00:09:44.914 "block_size": 512, 00:09:44.914 "num_blocks": 190464, 00:09:44.914 "uuid": "787bb613-074f-4910-b458-6849a5374c7c", 00:09:44.914 "assigned_rate_limits": { 00:09:44.914 "rw_ios_per_sec": 0, 00:09:44.914 "rw_mbytes_per_sec": 0, 00:09:44.914 
"r_mbytes_per_sec": 0, 00:09:44.914 "w_mbytes_per_sec": 0 00:09:44.914 }, 00:09:44.914 "claimed": false, 00:09:44.914 "zoned": false, 00:09:44.914 "supported_io_types": { 00:09:44.914 "read": true, 00:09:44.914 "write": true, 00:09:44.914 "unmap": true, 00:09:44.914 "flush": true, 00:09:44.914 "reset": true, 00:09:44.914 "nvme_admin": false, 00:09:44.914 "nvme_io": false, 00:09:44.914 "nvme_io_md": false, 00:09:44.914 "write_zeroes": true, 00:09:44.914 "zcopy": false, 00:09:44.914 "get_zone_info": false, 00:09:44.914 "zone_management": false, 00:09:44.914 "zone_append": false, 00:09:44.914 "compare": false, 00:09:44.914 "compare_and_write": false, 00:09:44.914 "abort": false, 00:09:44.914 "seek_hole": false, 00:09:44.914 "seek_data": false, 00:09:44.914 "copy": false, 00:09:44.914 "nvme_iov_md": false 00:09:44.914 }, 00:09:44.914 "memory_domains": [ 00:09:44.914 { 00:09:44.914 "dma_device_id": "system", 00:09:44.914 "dma_device_type": 1 00:09:44.914 }, 00:09:44.914 { 00:09:44.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.914 "dma_device_type": 2 00:09:44.914 }, 00:09:44.914 { 00:09:44.914 "dma_device_id": "system", 00:09:44.914 "dma_device_type": 1 00:09:44.914 }, 00:09:44.914 { 00:09:44.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.914 "dma_device_type": 2 00:09:44.914 }, 00:09:44.914 { 00:09:44.914 "dma_device_id": "system", 00:09:44.914 "dma_device_type": 1 00:09:44.914 }, 00:09:44.914 { 00:09:44.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.914 "dma_device_type": 2 00:09:44.914 } 00:09:44.914 ], 00:09:44.914 "driver_specific": { 00:09:44.914 "raid": { 00:09:44.914 "uuid": "787bb613-074f-4910-b458-6849a5374c7c", 00:09:44.914 "strip_size_kb": 64, 00:09:44.914 "state": "online", 00:09:44.914 "raid_level": "concat", 00:09:44.914 "superblock": true, 00:09:44.914 "num_base_bdevs": 3, 00:09:44.914 "num_base_bdevs_discovered": 3, 00:09:44.914 "num_base_bdevs_operational": 3, 00:09:44.914 "base_bdevs_list": [ 00:09:44.914 { 00:09:44.914 
"name": "BaseBdev1", 00:09:44.914 "uuid": "957b13d0-db54-4db6-bd3d-41fda842ef3a", 00:09:44.914 "is_configured": true, 00:09:44.914 "data_offset": 2048, 00:09:44.914 "data_size": 63488 00:09:44.914 }, 00:09:44.914 { 00:09:44.914 "name": "BaseBdev2", 00:09:44.914 "uuid": "d8f9a844-fa80-45e9-b6ca-a44b6e34d731", 00:09:44.914 "is_configured": true, 00:09:44.914 "data_offset": 2048, 00:09:44.914 "data_size": 63488 00:09:44.914 }, 00:09:44.914 { 00:09:44.914 "name": "BaseBdev3", 00:09:44.914 "uuid": "63927a01-7ce7-4a3b-9fa2-f06d95ae6c14", 00:09:44.914 "is_configured": true, 00:09:44.914 "data_offset": 2048, 00:09:44.914 "data_size": 63488 00:09:44.914 } 00:09:44.914 ] 00:09:44.914 } 00:09:44.914 } 00:09:44.914 }' 00:09:45.174 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:45.175 BaseBdev2 00:09:45.175 BaseBdev3' 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.175 15:36:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.175 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.175 [2024-11-25 15:36:43.849294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:45.175 [2024-11-25 15:36:43.849360] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.175 [2024-11-25 15:36:43.849434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.435 "name": "Existed_Raid", 00:09:45.435 "uuid": "787bb613-074f-4910-b458-6849a5374c7c", 00:09:45.435 "strip_size_kb": 64, 00:09:45.435 "state": "offline", 00:09:45.435 "raid_level": "concat", 00:09:45.435 "superblock": true, 00:09:45.435 "num_base_bdevs": 3, 00:09:45.435 "num_base_bdevs_discovered": 2, 00:09:45.435 "num_base_bdevs_operational": 2, 00:09:45.435 "base_bdevs_list": [ 00:09:45.435 { 00:09:45.435 "name": null, 00:09:45.435 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:45.435 "is_configured": false, 00:09:45.435 "data_offset": 0, 00:09:45.435 "data_size": 63488 00:09:45.435 }, 00:09:45.435 { 00:09:45.435 "name": "BaseBdev2", 00:09:45.435 "uuid": "d8f9a844-fa80-45e9-b6ca-a44b6e34d731", 00:09:45.435 "is_configured": true, 00:09:45.435 "data_offset": 2048, 00:09:45.435 "data_size": 63488 00:09:45.435 }, 00:09:45.435 { 00:09:45.435 "name": "BaseBdev3", 00:09:45.435 "uuid": "63927a01-7ce7-4a3b-9fa2-f06d95ae6c14", 00:09:45.435 "is_configured": true, 00:09:45.435 "data_offset": 2048, 00:09:45.435 "data_size": 63488 00:09:45.435 } 00:09:45.435 ] 00:09:45.435 }' 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.435 15:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.696 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:45.696 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:45.696 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.696 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.696 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.957 [2024-11-25 15:36:44.427867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.957 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.957 [2024-11-25 15:36:44.577178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:45.957 [2024-11-25 15:36:44.577270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.218 BaseBdev2 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.218 
15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.218 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.218 [ 00:09:46.218 { 00:09:46.218 "name": "BaseBdev2", 00:09:46.218 "aliases": [ 00:09:46.218 "4af04978-a99a-4a99-afec-a428131b2043" 00:09:46.218 ], 00:09:46.218 "product_name": "Malloc disk", 00:09:46.218 "block_size": 512, 00:09:46.218 "num_blocks": 65536, 00:09:46.218 "uuid": "4af04978-a99a-4a99-afec-a428131b2043", 00:09:46.218 "assigned_rate_limits": { 00:09:46.218 "rw_ios_per_sec": 0, 00:09:46.218 "rw_mbytes_per_sec": 0, 00:09:46.218 "r_mbytes_per_sec": 0, 00:09:46.218 "w_mbytes_per_sec": 0 
00:09:46.218 }, 00:09:46.218 "claimed": false, 00:09:46.218 "zoned": false, 00:09:46.218 "supported_io_types": { 00:09:46.218 "read": true, 00:09:46.218 "write": true, 00:09:46.218 "unmap": true, 00:09:46.218 "flush": true, 00:09:46.218 "reset": true, 00:09:46.218 "nvme_admin": false, 00:09:46.218 "nvme_io": false, 00:09:46.218 "nvme_io_md": false, 00:09:46.218 "write_zeroes": true, 00:09:46.218 "zcopy": true, 00:09:46.218 "get_zone_info": false, 00:09:46.218 "zone_management": false, 00:09:46.218 "zone_append": false, 00:09:46.218 "compare": false, 00:09:46.218 "compare_and_write": false, 00:09:46.218 "abort": true, 00:09:46.218 "seek_hole": false, 00:09:46.219 "seek_data": false, 00:09:46.219 "copy": true, 00:09:46.219 "nvme_iov_md": false 00:09:46.219 }, 00:09:46.219 "memory_domains": [ 00:09:46.219 { 00:09:46.219 "dma_device_id": "system", 00:09:46.219 "dma_device_type": 1 00:09:46.219 }, 00:09:46.219 { 00:09:46.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.219 "dma_device_type": 2 00:09:46.219 } 00:09:46.219 ], 00:09:46.219 "driver_specific": {} 00:09:46.219 } 00:09:46.219 ] 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.219 BaseBdev3 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.219 [ 00:09:46.219 { 00:09:46.219 "name": "BaseBdev3", 00:09:46.219 "aliases": [ 00:09:46.219 "5ff7bf4d-0caa-43f2-bd8b-10192b937457" 00:09:46.219 ], 00:09:46.219 "product_name": "Malloc disk", 00:09:46.219 "block_size": 512, 00:09:46.219 "num_blocks": 65536, 00:09:46.219 "uuid": "5ff7bf4d-0caa-43f2-bd8b-10192b937457", 00:09:46.219 "assigned_rate_limits": { 00:09:46.219 "rw_ios_per_sec": 0, 00:09:46.219 "rw_mbytes_per_sec": 0, 
00:09:46.219 "r_mbytes_per_sec": 0, 00:09:46.219 "w_mbytes_per_sec": 0 00:09:46.219 }, 00:09:46.219 "claimed": false, 00:09:46.219 "zoned": false, 00:09:46.219 "supported_io_types": { 00:09:46.219 "read": true, 00:09:46.219 "write": true, 00:09:46.219 "unmap": true, 00:09:46.219 "flush": true, 00:09:46.219 "reset": true, 00:09:46.219 "nvme_admin": false, 00:09:46.219 "nvme_io": false, 00:09:46.219 "nvme_io_md": false, 00:09:46.219 "write_zeroes": true, 00:09:46.219 "zcopy": true, 00:09:46.219 "get_zone_info": false, 00:09:46.219 "zone_management": false, 00:09:46.219 "zone_append": false, 00:09:46.219 "compare": false, 00:09:46.219 "compare_and_write": false, 00:09:46.219 "abort": true, 00:09:46.219 "seek_hole": false, 00:09:46.219 "seek_data": false, 00:09:46.219 "copy": true, 00:09:46.219 "nvme_iov_md": false 00:09:46.219 }, 00:09:46.219 "memory_domains": [ 00:09:46.219 { 00:09:46.219 "dma_device_id": "system", 00:09:46.219 "dma_device_type": 1 00:09:46.219 }, 00:09:46.219 { 00:09:46.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.219 "dma_device_type": 2 00:09:46.219 } 00:09:46.219 ], 00:09:46.219 "driver_specific": {} 00:09:46.219 } 00:09:46.219 ] 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.219 [2024-11-25 15:36:44.891754] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:46.219 [2024-11-25 15:36:44.891836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:46.219 [2024-11-25 15:36:44.891876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.219 [2024-11-25 15:36:44.893593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.219 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.480 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.480 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.480 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.480 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.480 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.480 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.480 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.480 15:36:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.480 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.480 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.480 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.480 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.480 "name": "Existed_Raid", 00:09:46.480 "uuid": "592df81e-0533-48e9-864b-fe30782b5f73", 00:09:46.480 "strip_size_kb": 64, 00:09:46.480 "state": "configuring", 00:09:46.480 "raid_level": "concat", 00:09:46.480 "superblock": true, 00:09:46.480 "num_base_bdevs": 3, 00:09:46.480 "num_base_bdevs_discovered": 2, 00:09:46.480 "num_base_bdevs_operational": 3, 00:09:46.480 "base_bdevs_list": [ 00:09:46.480 { 00:09:46.480 "name": "BaseBdev1", 00:09:46.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.480 "is_configured": false, 00:09:46.480 "data_offset": 0, 00:09:46.480 "data_size": 0 00:09:46.480 }, 00:09:46.480 { 00:09:46.480 "name": "BaseBdev2", 00:09:46.480 "uuid": "4af04978-a99a-4a99-afec-a428131b2043", 00:09:46.480 "is_configured": true, 00:09:46.480 "data_offset": 2048, 00:09:46.480 "data_size": 63488 00:09:46.480 }, 00:09:46.480 { 00:09:46.480 "name": "BaseBdev3", 00:09:46.480 "uuid": "5ff7bf4d-0caa-43f2-bd8b-10192b937457", 00:09:46.480 "is_configured": true, 00:09:46.480 "data_offset": 2048, 00:09:46.480 "data_size": 63488 00:09:46.480 } 00:09:46.480 ] 00:09:46.480 }' 00:09:46.480 15:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.480 15:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.740 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:46.740 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.740 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.740 [2024-11-25 15:36:45.326995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:46.740 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.741 "name": "Existed_Raid", 00:09:46.741 "uuid": "592df81e-0533-48e9-864b-fe30782b5f73", 00:09:46.741 "strip_size_kb": 64, 00:09:46.741 "state": "configuring", 00:09:46.741 "raid_level": "concat", 00:09:46.741 "superblock": true, 00:09:46.741 "num_base_bdevs": 3, 00:09:46.741 "num_base_bdevs_discovered": 1, 00:09:46.741 "num_base_bdevs_operational": 3, 00:09:46.741 "base_bdevs_list": [ 00:09:46.741 { 00:09:46.741 "name": "BaseBdev1", 00:09:46.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.741 "is_configured": false, 00:09:46.741 "data_offset": 0, 00:09:46.741 "data_size": 0 00:09:46.741 }, 00:09:46.741 { 00:09:46.741 "name": null, 00:09:46.741 "uuid": "4af04978-a99a-4a99-afec-a428131b2043", 00:09:46.741 "is_configured": false, 00:09:46.741 "data_offset": 0, 00:09:46.741 "data_size": 63488 00:09:46.741 }, 00:09:46.741 { 00:09:46.741 "name": "BaseBdev3", 00:09:46.741 "uuid": "5ff7bf4d-0caa-43f2-bd8b-10192b937457", 00:09:46.741 "is_configured": true, 00:09:46.741 "data_offset": 2048, 00:09:46.741 "data_size": 63488 00:09:46.741 } 00:09:46.741 ] 00:09:46.741 }' 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.741 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.311 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.311 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.312 [2024-11-25 15:36:45.847393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.312 BaseBdev1 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.312 15:36:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.312 [ 00:09:47.312 { 00:09:47.312 "name": "BaseBdev1", 00:09:47.312 "aliases": [ 00:09:47.312 "fa133912-2536-42e9-929f-bbba10ea0b51" 00:09:47.312 ], 00:09:47.312 "product_name": "Malloc disk", 00:09:47.312 "block_size": 512, 00:09:47.312 "num_blocks": 65536, 00:09:47.312 "uuid": "fa133912-2536-42e9-929f-bbba10ea0b51", 00:09:47.312 "assigned_rate_limits": { 00:09:47.312 "rw_ios_per_sec": 0, 00:09:47.312 "rw_mbytes_per_sec": 0, 00:09:47.312 "r_mbytes_per_sec": 0, 00:09:47.312 "w_mbytes_per_sec": 0 00:09:47.312 }, 00:09:47.312 "claimed": true, 00:09:47.312 "claim_type": "exclusive_write", 00:09:47.312 "zoned": false, 00:09:47.312 "supported_io_types": { 00:09:47.312 "read": true, 00:09:47.312 "write": true, 00:09:47.312 "unmap": true, 00:09:47.312 "flush": true, 00:09:47.312 "reset": true, 00:09:47.312 "nvme_admin": false, 00:09:47.312 "nvme_io": false, 00:09:47.312 "nvme_io_md": false, 00:09:47.312 "write_zeroes": true, 00:09:47.312 "zcopy": true, 00:09:47.312 "get_zone_info": false, 00:09:47.312 "zone_management": false, 00:09:47.312 "zone_append": false, 00:09:47.312 "compare": false, 00:09:47.312 "compare_and_write": false, 00:09:47.312 "abort": true, 00:09:47.312 "seek_hole": false, 00:09:47.312 "seek_data": false, 00:09:47.312 "copy": true, 00:09:47.312 "nvme_iov_md": false 00:09:47.312 }, 00:09:47.312 "memory_domains": [ 00:09:47.312 { 00:09:47.312 "dma_device_id": "system", 00:09:47.312 "dma_device_type": 1 00:09:47.312 }, 00:09:47.312 { 00:09:47.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.312 
"dma_device_type": 2 00:09:47.312 } 00:09:47.312 ], 00:09:47.312 "driver_specific": {} 00:09:47.312 } 00:09:47.312 ] 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.312 "name": "Existed_Raid", 00:09:47.312 "uuid": "592df81e-0533-48e9-864b-fe30782b5f73", 00:09:47.312 "strip_size_kb": 64, 00:09:47.312 "state": "configuring", 00:09:47.312 "raid_level": "concat", 00:09:47.312 "superblock": true, 00:09:47.312 "num_base_bdevs": 3, 00:09:47.312 "num_base_bdevs_discovered": 2, 00:09:47.312 "num_base_bdevs_operational": 3, 00:09:47.312 "base_bdevs_list": [ 00:09:47.312 { 00:09:47.312 "name": "BaseBdev1", 00:09:47.312 "uuid": "fa133912-2536-42e9-929f-bbba10ea0b51", 00:09:47.312 "is_configured": true, 00:09:47.312 "data_offset": 2048, 00:09:47.312 "data_size": 63488 00:09:47.312 }, 00:09:47.312 { 00:09:47.312 "name": null, 00:09:47.312 "uuid": "4af04978-a99a-4a99-afec-a428131b2043", 00:09:47.312 "is_configured": false, 00:09:47.312 "data_offset": 0, 00:09:47.312 "data_size": 63488 00:09:47.312 }, 00:09:47.312 { 00:09:47.312 "name": "BaseBdev3", 00:09:47.312 "uuid": "5ff7bf4d-0caa-43f2-bd8b-10192b937457", 00:09:47.312 "is_configured": true, 00:09:47.312 "data_offset": 2048, 00:09:47.312 "data_size": 63488 00:09:47.312 } 00:09:47.312 ] 00:09:47.312 }' 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.312 15:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.883 [2024-11-25 15:36:46.314601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.883 
15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.883 "name": "Existed_Raid", 00:09:47.883 "uuid": "592df81e-0533-48e9-864b-fe30782b5f73", 00:09:47.883 "strip_size_kb": 64, 00:09:47.883 "state": "configuring", 00:09:47.883 "raid_level": "concat", 00:09:47.883 "superblock": true, 00:09:47.883 "num_base_bdevs": 3, 00:09:47.883 "num_base_bdevs_discovered": 1, 00:09:47.883 "num_base_bdevs_operational": 3, 00:09:47.883 "base_bdevs_list": [ 00:09:47.883 { 00:09:47.883 "name": "BaseBdev1", 00:09:47.883 "uuid": "fa133912-2536-42e9-929f-bbba10ea0b51", 00:09:47.883 "is_configured": true, 00:09:47.883 "data_offset": 2048, 00:09:47.883 "data_size": 63488 00:09:47.883 }, 00:09:47.883 { 00:09:47.883 "name": null, 00:09:47.883 "uuid": "4af04978-a99a-4a99-afec-a428131b2043", 00:09:47.883 "is_configured": false, 00:09:47.883 "data_offset": 0, 00:09:47.883 "data_size": 63488 00:09:47.883 }, 00:09:47.883 { 00:09:47.883 "name": null, 00:09:47.883 "uuid": "5ff7bf4d-0caa-43f2-bd8b-10192b937457", 00:09:47.883 "is_configured": false, 00:09:47.883 "data_offset": 0, 00:09:47.883 "data_size": 63488 00:09:47.883 } 00:09:47.883 ] 00:09:47.883 }' 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.883 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.144 
15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.144 [2024-11-25 15:36:46.809863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.144 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.404 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.404 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.404 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.404 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.404 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.404 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.404 "name": "Existed_Raid", 00:09:48.405 "uuid": "592df81e-0533-48e9-864b-fe30782b5f73", 00:09:48.405 "strip_size_kb": 64, 00:09:48.405 "state": "configuring", 00:09:48.405 "raid_level": "concat", 00:09:48.405 "superblock": true, 00:09:48.405 "num_base_bdevs": 3, 00:09:48.405 "num_base_bdevs_discovered": 2, 00:09:48.405 "num_base_bdevs_operational": 3, 00:09:48.405 "base_bdevs_list": [ 00:09:48.405 { 00:09:48.405 "name": "BaseBdev1", 00:09:48.405 "uuid": "fa133912-2536-42e9-929f-bbba10ea0b51", 00:09:48.405 "is_configured": true, 00:09:48.405 "data_offset": 2048, 00:09:48.405 "data_size": 63488 00:09:48.405 }, 00:09:48.405 { 00:09:48.405 "name": null, 00:09:48.405 "uuid": "4af04978-a99a-4a99-afec-a428131b2043", 00:09:48.405 "is_configured": false, 00:09:48.405 "data_offset": 0, 00:09:48.405 "data_size": 
63488 00:09:48.405 }, 00:09:48.405 { 00:09:48.405 "name": "BaseBdev3", 00:09:48.405 "uuid": "5ff7bf4d-0caa-43f2-bd8b-10192b937457", 00:09:48.405 "is_configured": true, 00:09:48.405 "data_offset": 2048, 00:09:48.405 "data_size": 63488 00:09:48.405 } 00:09:48.405 ] 00:09:48.405 }' 00:09:48.405 15:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.405 15:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.665 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:48.665 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.665 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.665 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.665 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.665 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:48.665 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:48.665 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.665 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.665 [2024-11-25 15:36:47.273070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:48.956 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.981 "name": "Existed_Raid", 00:09:48.981 "uuid": "592df81e-0533-48e9-864b-fe30782b5f73", 00:09:48.981 "strip_size_kb": 64, 00:09:48.981 "state": "configuring", 00:09:48.981 "raid_level": "concat", 00:09:48.981 "superblock": true, 00:09:48.981 "num_base_bdevs": 3, 00:09:48.981 "num_base_bdevs_discovered": 1, 00:09:48.981 "num_base_bdevs_operational": 
3, 00:09:48.981 "base_bdevs_list": [ 00:09:48.981 { 00:09:48.981 "name": null, 00:09:48.981 "uuid": "fa133912-2536-42e9-929f-bbba10ea0b51", 00:09:48.981 "is_configured": false, 00:09:48.981 "data_offset": 0, 00:09:48.981 "data_size": 63488 00:09:48.981 }, 00:09:48.981 { 00:09:48.981 "name": null, 00:09:48.981 "uuid": "4af04978-a99a-4a99-afec-a428131b2043", 00:09:48.981 "is_configured": false, 00:09:48.981 "data_offset": 0, 00:09:48.981 "data_size": 63488 00:09:48.981 }, 00:09:48.981 { 00:09:48.981 "name": "BaseBdev3", 00:09:48.981 "uuid": "5ff7bf4d-0caa-43f2-bd8b-10192b937457", 00:09:48.981 "is_configured": true, 00:09:48.981 "data_offset": 2048, 00:09:48.981 "data_size": 63488 00:09:48.981 } 00:09:48.981 ] 00:09:48.981 }' 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.981 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:49.242 [2024-11-25 15:36:47.849313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.242 "name": "Existed_Raid", 00:09:49.242 "uuid": "592df81e-0533-48e9-864b-fe30782b5f73", 00:09:49.242 "strip_size_kb": 64, 00:09:49.242 "state": "configuring", 00:09:49.242 "raid_level": "concat", 00:09:49.242 "superblock": true, 00:09:49.242 "num_base_bdevs": 3, 00:09:49.242 "num_base_bdevs_discovered": 2, 00:09:49.242 "num_base_bdevs_operational": 3, 00:09:49.242 "base_bdevs_list": [ 00:09:49.242 { 00:09:49.242 "name": null, 00:09:49.242 "uuid": "fa133912-2536-42e9-929f-bbba10ea0b51", 00:09:49.242 "is_configured": false, 00:09:49.242 "data_offset": 0, 00:09:49.242 "data_size": 63488 00:09:49.242 }, 00:09:49.242 { 00:09:49.242 "name": "BaseBdev2", 00:09:49.242 "uuid": "4af04978-a99a-4a99-afec-a428131b2043", 00:09:49.242 "is_configured": true, 00:09:49.242 "data_offset": 2048, 00:09:49.242 "data_size": 63488 00:09:49.242 }, 00:09:49.242 { 00:09:49.242 "name": "BaseBdev3", 00:09:49.242 "uuid": "5ff7bf4d-0caa-43f2-bd8b-10192b937457", 00:09:49.242 "is_configured": true, 00:09:49.242 "data_offset": 2048, 00:09:49.242 "data_size": 63488 00:09:49.242 } 00:09:49.242 ] 00:09:49.242 }' 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.242 15:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fa133912-2536-42e9-929f-bbba10ea0b51 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.813 [2024-11-25 15:36:48.372828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:49.813 [2024-11-25 15:36:48.373128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:49.813 [2024-11-25 15:36:48.373188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:49.813 [2024-11-25 15:36:48.373446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:49.813 [2024-11-25 15:36:48.373616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:49.813 NewBaseBdev 00:09:49.813 [2024-11-25 15:36:48.373664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:49.813 [2024-11-25 15:36:48.373848] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.813 [ 00:09:49.813 { 00:09:49.813 "name": "NewBaseBdev", 00:09:49.813 "aliases": [ 00:09:49.813 "fa133912-2536-42e9-929f-bbba10ea0b51" 00:09:49.813 ], 00:09:49.813 "product_name": "Malloc disk", 00:09:49.813 "block_size": 512, 00:09:49.813 "num_blocks": 65536, 00:09:49.813 "uuid": 
"fa133912-2536-42e9-929f-bbba10ea0b51", 00:09:49.813 "assigned_rate_limits": { 00:09:49.813 "rw_ios_per_sec": 0, 00:09:49.813 "rw_mbytes_per_sec": 0, 00:09:49.813 "r_mbytes_per_sec": 0, 00:09:49.813 "w_mbytes_per_sec": 0 00:09:49.813 }, 00:09:49.813 "claimed": true, 00:09:49.813 "claim_type": "exclusive_write", 00:09:49.813 "zoned": false, 00:09:49.813 "supported_io_types": { 00:09:49.813 "read": true, 00:09:49.813 "write": true, 00:09:49.813 "unmap": true, 00:09:49.813 "flush": true, 00:09:49.813 "reset": true, 00:09:49.813 "nvme_admin": false, 00:09:49.813 "nvme_io": false, 00:09:49.813 "nvme_io_md": false, 00:09:49.813 "write_zeroes": true, 00:09:49.813 "zcopy": true, 00:09:49.813 "get_zone_info": false, 00:09:49.813 "zone_management": false, 00:09:49.813 "zone_append": false, 00:09:49.813 "compare": false, 00:09:49.813 "compare_and_write": false, 00:09:49.813 "abort": true, 00:09:49.813 "seek_hole": false, 00:09:49.813 "seek_data": false, 00:09:49.813 "copy": true, 00:09:49.813 "nvme_iov_md": false 00:09:49.813 }, 00:09:49.813 "memory_domains": [ 00:09:49.813 { 00:09:49.813 "dma_device_id": "system", 00:09:49.813 "dma_device_type": 1 00:09:49.813 }, 00:09:49.813 { 00:09:49.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.813 "dma_device_type": 2 00:09:49.813 } 00:09:49.813 ], 00:09:49.813 "driver_specific": {} 00:09:49.813 } 00:09:49.813 ] 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.813 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.814 15:36:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.814 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.814 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.814 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.814 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.814 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.814 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.814 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.814 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.814 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.814 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.814 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.814 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.814 "name": "Existed_Raid", 00:09:49.814 "uuid": "592df81e-0533-48e9-864b-fe30782b5f73", 00:09:49.814 "strip_size_kb": 64, 00:09:49.814 "state": "online", 00:09:49.814 "raid_level": "concat", 00:09:49.814 "superblock": true, 00:09:49.814 "num_base_bdevs": 3, 00:09:49.814 "num_base_bdevs_discovered": 3, 00:09:49.814 "num_base_bdevs_operational": 3, 00:09:49.814 "base_bdevs_list": [ 00:09:49.814 { 00:09:49.814 "name": "NewBaseBdev", 00:09:49.814 "uuid": "fa133912-2536-42e9-929f-bbba10ea0b51", 00:09:49.814 "is_configured": 
true, 00:09:49.814 "data_offset": 2048, 00:09:49.814 "data_size": 63488 00:09:49.814 }, 00:09:49.814 { 00:09:49.814 "name": "BaseBdev2", 00:09:49.814 "uuid": "4af04978-a99a-4a99-afec-a428131b2043", 00:09:49.814 "is_configured": true, 00:09:49.814 "data_offset": 2048, 00:09:49.814 "data_size": 63488 00:09:49.814 }, 00:09:49.814 { 00:09:49.814 "name": "BaseBdev3", 00:09:49.814 "uuid": "5ff7bf4d-0caa-43f2-bd8b-10192b937457", 00:09:49.814 "is_configured": true, 00:09:49.814 "data_offset": 2048, 00:09:49.814 "data_size": 63488 00:09:49.814 } 00:09:49.814 ] 00:09:49.814 }' 00:09:49.814 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.814 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.384 [2024-11-25 15:36:48.860373] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:50.384 "name": "Existed_Raid", 00:09:50.384 "aliases": [ 00:09:50.384 "592df81e-0533-48e9-864b-fe30782b5f73" 00:09:50.384 ], 00:09:50.384 "product_name": "Raid Volume", 00:09:50.384 "block_size": 512, 00:09:50.384 "num_blocks": 190464, 00:09:50.384 "uuid": "592df81e-0533-48e9-864b-fe30782b5f73", 00:09:50.384 "assigned_rate_limits": { 00:09:50.384 "rw_ios_per_sec": 0, 00:09:50.384 "rw_mbytes_per_sec": 0, 00:09:50.384 "r_mbytes_per_sec": 0, 00:09:50.384 "w_mbytes_per_sec": 0 00:09:50.384 }, 00:09:50.384 "claimed": false, 00:09:50.384 "zoned": false, 00:09:50.384 "supported_io_types": { 00:09:50.384 "read": true, 00:09:50.384 "write": true, 00:09:50.384 "unmap": true, 00:09:50.384 "flush": true, 00:09:50.384 "reset": true, 00:09:50.384 "nvme_admin": false, 00:09:50.384 "nvme_io": false, 00:09:50.384 "nvme_io_md": false, 00:09:50.384 "write_zeroes": true, 00:09:50.384 "zcopy": false, 00:09:50.384 "get_zone_info": false, 00:09:50.384 "zone_management": false, 00:09:50.384 "zone_append": false, 00:09:50.384 "compare": false, 00:09:50.384 "compare_and_write": false, 00:09:50.384 "abort": false, 00:09:50.384 "seek_hole": false, 00:09:50.384 "seek_data": false, 00:09:50.384 "copy": false, 00:09:50.384 "nvme_iov_md": false 00:09:50.384 }, 00:09:50.384 "memory_domains": [ 00:09:50.384 { 00:09:50.384 "dma_device_id": "system", 00:09:50.384 "dma_device_type": 1 00:09:50.384 }, 00:09:50.384 { 00:09:50.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.384 "dma_device_type": 2 00:09:50.384 }, 00:09:50.384 { 00:09:50.384 "dma_device_id": "system", 00:09:50.384 "dma_device_type": 1 00:09:50.384 }, 00:09:50.384 { 00:09:50.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.384 
"dma_device_type": 2 00:09:50.384 }, 00:09:50.384 { 00:09:50.384 "dma_device_id": "system", 00:09:50.384 "dma_device_type": 1 00:09:50.384 }, 00:09:50.384 { 00:09:50.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.384 "dma_device_type": 2 00:09:50.384 } 00:09:50.384 ], 00:09:50.384 "driver_specific": { 00:09:50.384 "raid": { 00:09:50.384 "uuid": "592df81e-0533-48e9-864b-fe30782b5f73", 00:09:50.384 "strip_size_kb": 64, 00:09:50.384 "state": "online", 00:09:50.384 "raid_level": "concat", 00:09:50.384 "superblock": true, 00:09:50.384 "num_base_bdevs": 3, 00:09:50.384 "num_base_bdevs_discovered": 3, 00:09:50.384 "num_base_bdevs_operational": 3, 00:09:50.384 "base_bdevs_list": [ 00:09:50.384 { 00:09:50.384 "name": "NewBaseBdev", 00:09:50.384 "uuid": "fa133912-2536-42e9-929f-bbba10ea0b51", 00:09:50.384 "is_configured": true, 00:09:50.384 "data_offset": 2048, 00:09:50.384 "data_size": 63488 00:09:50.384 }, 00:09:50.384 { 00:09:50.384 "name": "BaseBdev2", 00:09:50.384 "uuid": "4af04978-a99a-4a99-afec-a428131b2043", 00:09:50.384 "is_configured": true, 00:09:50.384 "data_offset": 2048, 00:09:50.384 "data_size": 63488 00:09:50.384 }, 00:09:50.384 { 00:09:50.384 "name": "BaseBdev3", 00:09:50.384 "uuid": "5ff7bf4d-0caa-43f2-bd8b-10192b937457", 00:09:50.384 "is_configured": true, 00:09:50.384 "data_offset": 2048, 00:09:50.384 "data_size": 63488 00:09:50.384 } 00:09:50.384 ] 00:09:50.384 } 00:09:50.384 } 00:09:50.384 }' 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:50.384 BaseBdev2 00:09:50.384 BaseBdev3' 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.384 15:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.384 15:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.384 15:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:50.384 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.384 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.384 15:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.384 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.384 15:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.384 15:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.384 
15:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.384 15:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:50.384 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.384 15:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.384 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.644 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.644 15:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.644 15:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.644 15:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:50.644 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.644 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.644 [2024-11-25 15:36:49.111608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:50.644 [2024-11-25 15:36:49.111638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.644 [2024-11-25 15:36:49.111724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.645 [2024-11-25 15:36:49.111776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.645 [2024-11-25 15:36:49.111787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:50.645 15:36:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.645 15:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66014 00:09:50.645 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66014 ']' 00:09:50.645 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66014 00:09:50.645 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:50.645 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.645 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66014 00:09:50.645 killing process with pid 66014 00:09:50.645 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.645 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.645 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66014' 00:09:50.645 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66014 00:09:50.645 [2024-11-25 15:36:49.158758] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.645 15:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66014 00:09:50.904 [2024-11-25 15:36:49.442305] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.842 15:36:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:51.842 00:09:51.842 real 0m10.425s 00:09:51.842 user 0m16.701s 00:09:51.842 sys 0m1.766s 00:09:51.842 15:36:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.842 15:36:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:51.842 ************************************ 00:09:51.842 END TEST raid_state_function_test_sb 00:09:51.842 ************************************ 00:09:52.102 15:36:50 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:52.102 15:36:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:52.102 15:36:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.102 15:36:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.102 ************************************ 00:09:52.102 START TEST raid_superblock_test 00:09:52.102 ************************************ 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:52.102 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66629 00:09:52.103 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:52.103 15:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66629 00:09:52.103 15:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66629 ']' 00:09:52.103 15:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.103 15:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.103 15:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.103 15:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.103 15:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.103 [2024-11-25 15:36:50.661195] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:09:52.103 [2024-11-25 15:36:50.661330] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66629 ] 00:09:52.362 [2024-11-25 15:36:50.834685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.362 [2024-11-25 15:36:50.947286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.622 [2024-11-25 15:36:51.136398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.622 [2024-11-25 15:36:51.136464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:52.882 
15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.882 malloc1 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.882 [2024-11-25 15:36:51.525781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:52.882 [2024-11-25 15:36:51.525850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.882 [2024-11-25 15:36:51.525888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:52.882 [2024-11-25 15:36:51.525897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.882 [2024-11-25 15:36:51.527953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.882 [2024-11-25 15:36:51.527993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:52.882 pt1 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.882 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.142 malloc2 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.142 [2024-11-25 15:36:51.579054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:53.142 [2024-11-25 15:36:51.579108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.142 [2024-11-25 15:36:51.579128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:53.142 [2024-11-25 15:36:51.579136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.142 [2024-11-25 15:36:51.581124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.142 [2024-11-25 15:36:51.581155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:53.142 
pt2 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.142 malloc3 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.142 [2024-11-25 15:36:51.646258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:53.142 [2024-11-25 15:36:51.646311] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.142 [2024-11-25 15:36:51.646329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:53.142 [2024-11-25 15:36:51.646337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.142 [2024-11-25 15:36:51.648379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.142 [2024-11-25 15:36:51.648416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:53.142 pt3 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.142 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.142 [2024-11-25 15:36:51.658307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:53.142 [2024-11-25 15:36:51.660128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:53.142 [2024-11-25 15:36:51.660204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:53.143 [2024-11-25 15:36:51.660367] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:53.143 [2024-11-25 15:36:51.660388] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:53.143 [2024-11-25 15:36:51.660630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:53.143 [2024-11-25 15:36:51.660802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:53.143 [2024-11-25 15:36:51.660820] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:53.143 [2024-11-25 15:36:51.660956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.143 15:36:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.143 "name": "raid_bdev1", 00:09:53.143 "uuid": "eb8b3af7-8ebb-4ac1-ac61-d88fc66eeec6", 00:09:53.143 "strip_size_kb": 64, 00:09:53.143 "state": "online", 00:09:53.143 "raid_level": "concat", 00:09:53.143 "superblock": true, 00:09:53.143 "num_base_bdevs": 3, 00:09:53.143 "num_base_bdevs_discovered": 3, 00:09:53.143 "num_base_bdevs_operational": 3, 00:09:53.143 "base_bdevs_list": [ 00:09:53.143 { 00:09:53.143 "name": "pt1", 00:09:53.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:53.143 "is_configured": true, 00:09:53.143 "data_offset": 2048, 00:09:53.143 "data_size": 63488 00:09:53.143 }, 00:09:53.143 { 00:09:53.143 "name": "pt2", 00:09:53.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.143 "is_configured": true, 00:09:53.143 "data_offset": 2048, 00:09:53.143 "data_size": 63488 00:09:53.143 }, 00:09:53.143 { 00:09:53.143 "name": "pt3", 00:09:53.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.143 "is_configured": true, 00:09:53.143 "data_offset": 2048, 00:09:53.143 "data_size": 63488 00:09:53.143 } 00:09:53.143 ] 00:09:53.143 }' 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.143 15:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.403 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:53.403 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:53.403 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:53.403 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:53.403 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.403 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:53.664 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:53.664 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.664 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.664 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.664 [2024-11-25 15:36:52.093806] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.664 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.664 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:53.664 "name": "raid_bdev1", 00:09:53.664 "aliases": [ 00:09:53.664 "eb8b3af7-8ebb-4ac1-ac61-d88fc66eeec6" 00:09:53.664 ], 00:09:53.664 "product_name": "Raid Volume", 00:09:53.664 "block_size": 512, 00:09:53.664 "num_blocks": 190464, 00:09:53.664 "uuid": "eb8b3af7-8ebb-4ac1-ac61-d88fc66eeec6", 00:09:53.664 "assigned_rate_limits": { 00:09:53.664 "rw_ios_per_sec": 0, 00:09:53.664 "rw_mbytes_per_sec": 0, 00:09:53.664 "r_mbytes_per_sec": 0, 00:09:53.664 "w_mbytes_per_sec": 0 00:09:53.664 }, 00:09:53.664 "claimed": false, 00:09:53.664 "zoned": false, 00:09:53.664 "supported_io_types": { 00:09:53.664 "read": true, 00:09:53.664 "write": true, 00:09:53.664 "unmap": true, 00:09:53.664 "flush": true, 00:09:53.664 "reset": true, 00:09:53.664 "nvme_admin": false, 00:09:53.664 "nvme_io": false, 00:09:53.664 "nvme_io_md": false, 00:09:53.665 "write_zeroes": true, 00:09:53.665 "zcopy": false, 00:09:53.665 "get_zone_info": false, 00:09:53.665 "zone_management": false, 00:09:53.665 "zone_append": false, 00:09:53.665 "compare": 
false, 00:09:53.665 "compare_and_write": false, 00:09:53.665 "abort": false, 00:09:53.665 "seek_hole": false, 00:09:53.665 "seek_data": false, 00:09:53.665 "copy": false, 00:09:53.665 "nvme_iov_md": false 00:09:53.665 }, 00:09:53.665 "memory_domains": [ 00:09:53.665 { 00:09:53.665 "dma_device_id": "system", 00:09:53.665 "dma_device_type": 1 00:09:53.665 }, 00:09:53.665 { 00:09:53.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.665 "dma_device_type": 2 00:09:53.665 }, 00:09:53.665 { 00:09:53.665 "dma_device_id": "system", 00:09:53.665 "dma_device_type": 1 00:09:53.665 }, 00:09:53.665 { 00:09:53.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.665 "dma_device_type": 2 00:09:53.665 }, 00:09:53.665 { 00:09:53.665 "dma_device_id": "system", 00:09:53.665 "dma_device_type": 1 00:09:53.665 }, 00:09:53.665 { 00:09:53.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.665 "dma_device_type": 2 00:09:53.665 } 00:09:53.665 ], 00:09:53.665 "driver_specific": { 00:09:53.665 "raid": { 00:09:53.665 "uuid": "eb8b3af7-8ebb-4ac1-ac61-d88fc66eeec6", 00:09:53.665 "strip_size_kb": 64, 00:09:53.665 "state": "online", 00:09:53.665 "raid_level": "concat", 00:09:53.665 "superblock": true, 00:09:53.665 "num_base_bdevs": 3, 00:09:53.665 "num_base_bdevs_discovered": 3, 00:09:53.665 "num_base_bdevs_operational": 3, 00:09:53.665 "base_bdevs_list": [ 00:09:53.665 { 00:09:53.665 "name": "pt1", 00:09:53.665 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:53.665 "is_configured": true, 00:09:53.665 "data_offset": 2048, 00:09:53.665 "data_size": 63488 00:09:53.665 }, 00:09:53.665 { 00:09:53.665 "name": "pt2", 00:09:53.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.665 "is_configured": true, 00:09:53.665 "data_offset": 2048, 00:09:53.665 "data_size": 63488 00:09:53.665 }, 00:09:53.665 { 00:09:53.665 "name": "pt3", 00:09:53.665 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.665 "is_configured": true, 00:09:53.665 "data_offset": 2048, 00:09:53.665 
"data_size": 63488 00:09:53.665 } 00:09:53.665 ] 00:09:53.665 } 00:09:53.665 } 00:09:53.665 }' 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:53.665 pt2 00:09:53.665 pt3' 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.665 15:36:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.665 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:53.665 [2024-11-25 15:36:52.341402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.926 15:36:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eb8b3af7-8ebb-4ac1-ac61-d88fc66eeec6 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z eb8b3af7-8ebb-4ac1-ac61-d88fc66eeec6 ']' 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.926 [2024-11-25 15:36:52.377024] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:53.926 [2024-11-25 15:36:52.377053] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.926 [2024-11-25 15:36:52.377128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.926 [2024-11-25 15:36:52.377187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.926 [2024-11-25 15:36:52.377196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.926 15:36:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.926 [2024-11-25 15:36:52.524794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:53.926 [2024-11-25 15:36:52.526592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:09:53.926 [2024-11-25 15:36:52.526647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:53.926 [2024-11-25 15:36:52.526692] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:53.926 [2024-11-25 15:36:52.526741] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:53.926 [2024-11-25 15:36:52.526759] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:53.926 [2024-11-25 15:36:52.526775] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:53.926 [2024-11-25 15:36:52.526784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:53.926 request: 00:09:53.926 { 00:09:53.926 "name": "raid_bdev1", 00:09:53.926 "raid_level": "concat", 00:09:53.926 "base_bdevs": [ 00:09:53.926 "malloc1", 00:09:53.926 "malloc2", 00:09:53.926 "malloc3" 00:09:53.926 ], 00:09:53.926 "strip_size_kb": 64, 00:09:53.926 "superblock": false, 00:09:53.926 "method": "bdev_raid_create", 00:09:53.926 "req_id": 1 00:09:53.926 } 00:09:53.926 Got JSON-RPC error response 00:09:53.926 response: 00:09:53.926 { 00:09:53.926 "code": -17, 00:09:53.926 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:53.926 } 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.926 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.926 [2024-11-25 15:36:52.588632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:53.926 [2024-11-25 15:36:52.588678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.926 [2024-11-25 15:36:52.588696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:53.926 [2024-11-25 15:36:52.588704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.927 [2024-11-25 15:36:52.590736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.927 [2024-11-25 15:36:52.590774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:53.927 [2024-11-25 15:36:52.590849] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:53.927 [2024-11-25 15:36:52.590897] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:53.927 pt1 00:09:53.927 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.927 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:53.927 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.927 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.927 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.927 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.927 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.927 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.927 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.927 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.927 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.927 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.927 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.927 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.927 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.187 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.187 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.187 "name": "raid_bdev1", 
00:09:54.187 "uuid": "eb8b3af7-8ebb-4ac1-ac61-d88fc66eeec6", 00:09:54.187 "strip_size_kb": 64, 00:09:54.187 "state": "configuring", 00:09:54.187 "raid_level": "concat", 00:09:54.187 "superblock": true, 00:09:54.187 "num_base_bdevs": 3, 00:09:54.187 "num_base_bdevs_discovered": 1, 00:09:54.187 "num_base_bdevs_operational": 3, 00:09:54.187 "base_bdevs_list": [ 00:09:54.187 { 00:09:54.187 "name": "pt1", 00:09:54.187 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.187 "is_configured": true, 00:09:54.187 "data_offset": 2048, 00:09:54.187 "data_size": 63488 00:09:54.187 }, 00:09:54.187 { 00:09:54.187 "name": null, 00:09:54.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.187 "is_configured": false, 00:09:54.187 "data_offset": 2048, 00:09:54.187 "data_size": 63488 00:09:54.187 }, 00:09:54.187 { 00:09:54.187 "name": null, 00:09:54.187 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.187 "is_configured": false, 00:09:54.187 "data_offset": 2048, 00:09:54.187 "data_size": 63488 00:09:54.187 } 00:09:54.187 ] 00:09:54.187 }' 00:09:54.187 15:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.187 15:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.447 [2024-11-25 15:36:53.055878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.447 [2024-11-25 15:36:53.055946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.447 [2024-11-25 15:36:53.055973] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:54.447 [2024-11-25 15:36:53.055985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.447 [2024-11-25 15:36:53.056429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.447 [2024-11-25 15:36:53.056454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.447 [2024-11-25 15:36:53.056543] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:54.447 [2024-11-25 15:36:53.056568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.447 pt2 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.447 [2024-11-25 15:36:53.067852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.447 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.448 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.448 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.448 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.448 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.448 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.448 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.448 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.448 "name": "raid_bdev1", 00:09:54.448 "uuid": "eb8b3af7-8ebb-4ac1-ac61-d88fc66eeec6", 00:09:54.448 "strip_size_kb": 64, 00:09:54.448 "state": "configuring", 00:09:54.448 "raid_level": "concat", 00:09:54.448 "superblock": true, 00:09:54.448 "num_base_bdevs": 3, 00:09:54.448 "num_base_bdevs_discovered": 1, 00:09:54.448 "num_base_bdevs_operational": 3, 00:09:54.448 "base_bdevs_list": [ 00:09:54.448 { 00:09:54.448 "name": "pt1", 00:09:54.448 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.448 "is_configured": true, 00:09:54.448 "data_offset": 2048, 00:09:54.448 "data_size": 63488 00:09:54.448 }, 00:09:54.448 { 00:09:54.448 "name": null, 00:09:54.448 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.448 "is_configured": false, 00:09:54.448 "data_offset": 0, 00:09:54.448 "data_size": 63488 00:09:54.448 }, 00:09:54.448 { 00:09:54.448 "name": null, 00:09:54.448 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.448 "is_configured": false, 00:09:54.448 "data_offset": 2048, 00:09:54.448 "data_size": 63488 00:09:54.448 } 00:09:54.448 ] 00:09:54.448 }' 00:09:54.448 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.448 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.019 [2024-11-25 15:36:53.519061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:55.019 [2024-11-25 15:36:53.519131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.019 [2024-11-25 15:36:53.519148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:55.019 [2024-11-25 15:36:53.519159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.019 [2024-11-25 15:36:53.519623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.019 [2024-11-25 15:36:53.519652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:55.019 [2024-11-25 15:36:53.519735] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:55.019 [2024-11-25 15:36:53.519765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:55.019 pt2 00:09:55.019 15:36:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.019 [2024-11-25 15:36:53.530999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:55.019 [2024-11-25 15:36:53.531057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.019 [2024-11-25 15:36:53.531070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:55.019 [2024-11-25 15:36:53.531079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.019 [2024-11-25 15:36:53.531420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.019 [2024-11-25 15:36:53.531448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:55.019 [2024-11-25 15:36:53.531505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:55.019 [2024-11-25 15:36:53.531533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:55.019 [2024-11-25 15:36:53.531636] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:55.019 [2024-11-25 15:36:53.531653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:55.019 [2024-11-25 15:36:53.531885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:55.019 [2024-11-25 15:36:53.532038] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:55.019 [2024-11-25 15:36:53.532052] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:55.019 [2024-11-25 15:36:53.532183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.019 pt3 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.019 15:36:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.019 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.019 "name": "raid_bdev1", 00:09:55.019 "uuid": "eb8b3af7-8ebb-4ac1-ac61-d88fc66eeec6", 00:09:55.019 "strip_size_kb": 64, 00:09:55.019 "state": "online", 00:09:55.019 "raid_level": "concat", 00:09:55.019 "superblock": true, 00:09:55.019 "num_base_bdevs": 3, 00:09:55.019 "num_base_bdevs_discovered": 3, 00:09:55.019 "num_base_bdevs_operational": 3, 00:09:55.019 "base_bdevs_list": [ 00:09:55.019 { 00:09:55.019 "name": "pt1", 00:09:55.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.019 "is_configured": true, 00:09:55.019 "data_offset": 2048, 00:09:55.019 "data_size": 63488 00:09:55.019 }, 00:09:55.019 { 00:09:55.019 "name": "pt2", 00:09:55.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.020 "is_configured": true, 00:09:55.020 "data_offset": 2048, 00:09:55.020 "data_size": 63488 00:09:55.020 }, 00:09:55.020 { 00:09:55.020 "name": "pt3", 00:09:55.020 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.020 "is_configured": true, 00:09:55.020 "data_offset": 2048, 00:09:55.020 "data_size": 63488 00:09:55.020 } 00:09:55.020 ] 00:09:55.020 }' 00:09:55.020 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.020 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.280 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:55.280 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:55.280 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.280 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.280 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.280 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.280 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.280 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:55.280 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.280 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.280 [2024-11-25 15:36:53.950618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.540 15:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.540 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.540 "name": "raid_bdev1", 00:09:55.540 "aliases": [ 00:09:55.540 "eb8b3af7-8ebb-4ac1-ac61-d88fc66eeec6" 00:09:55.540 ], 00:09:55.540 "product_name": "Raid Volume", 00:09:55.540 "block_size": 512, 00:09:55.540 "num_blocks": 190464, 00:09:55.540 "uuid": "eb8b3af7-8ebb-4ac1-ac61-d88fc66eeec6", 00:09:55.540 "assigned_rate_limits": { 00:09:55.540 "rw_ios_per_sec": 0, 00:09:55.540 "rw_mbytes_per_sec": 0, 00:09:55.540 "r_mbytes_per_sec": 0, 00:09:55.540 "w_mbytes_per_sec": 0 00:09:55.540 }, 00:09:55.540 "claimed": false, 00:09:55.540 "zoned": false, 00:09:55.540 "supported_io_types": { 00:09:55.540 "read": true, 00:09:55.540 "write": true, 00:09:55.540 "unmap": true, 00:09:55.540 "flush": true, 00:09:55.540 "reset": true, 00:09:55.540 "nvme_admin": false, 00:09:55.540 "nvme_io": false, 
00:09:55.540 "nvme_io_md": false, 00:09:55.540 "write_zeroes": true, 00:09:55.540 "zcopy": false, 00:09:55.540 "get_zone_info": false, 00:09:55.540 "zone_management": false, 00:09:55.540 "zone_append": false, 00:09:55.540 "compare": false, 00:09:55.540 "compare_and_write": false, 00:09:55.540 "abort": false, 00:09:55.540 "seek_hole": false, 00:09:55.540 "seek_data": false, 00:09:55.540 "copy": false, 00:09:55.540 "nvme_iov_md": false 00:09:55.540 }, 00:09:55.540 "memory_domains": [ 00:09:55.540 { 00:09:55.540 "dma_device_id": "system", 00:09:55.540 "dma_device_type": 1 00:09:55.540 }, 00:09:55.540 { 00:09:55.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.540 "dma_device_type": 2 00:09:55.540 }, 00:09:55.540 { 00:09:55.540 "dma_device_id": "system", 00:09:55.540 "dma_device_type": 1 00:09:55.540 }, 00:09:55.540 { 00:09:55.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.540 "dma_device_type": 2 00:09:55.540 }, 00:09:55.540 { 00:09:55.540 "dma_device_id": "system", 00:09:55.540 "dma_device_type": 1 00:09:55.540 }, 00:09:55.540 { 00:09:55.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.540 "dma_device_type": 2 00:09:55.540 } 00:09:55.540 ], 00:09:55.540 "driver_specific": { 00:09:55.540 "raid": { 00:09:55.540 "uuid": "eb8b3af7-8ebb-4ac1-ac61-d88fc66eeec6", 00:09:55.540 "strip_size_kb": 64, 00:09:55.540 "state": "online", 00:09:55.540 "raid_level": "concat", 00:09:55.540 "superblock": true, 00:09:55.540 "num_base_bdevs": 3, 00:09:55.540 "num_base_bdevs_discovered": 3, 00:09:55.540 "num_base_bdevs_operational": 3, 00:09:55.540 "base_bdevs_list": [ 00:09:55.540 { 00:09:55.540 "name": "pt1", 00:09:55.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.540 "is_configured": true, 00:09:55.540 "data_offset": 2048, 00:09:55.540 "data_size": 63488 00:09:55.540 }, 00:09:55.540 { 00:09:55.540 "name": "pt2", 00:09:55.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.540 "is_configured": true, 00:09:55.540 "data_offset": 2048, 00:09:55.541 
"data_size": 63488 00:09:55.541 }, 00:09:55.541 { 00:09:55.541 "name": "pt3", 00:09:55.541 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.541 "is_configured": true, 00:09:55.541 "data_offset": 2048, 00:09:55.541 "data_size": 63488 00:09:55.541 } 00:09:55.541 ] 00:09:55.541 } 00:09:55.541 } 00:09:55.541 }' 00:09:55.541 15:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:55.541 pt2 00:09:55.541 pt3' 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:55.541 [2024-11-25 15:36:54.194191] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.541 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.801 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' eb8b3af7-8ebb-4ac1-ac61-d88fc66eeec6 '!=' eb8b3af7-8ebb-4ac1-ac61-d88fc66eeec6 ']' 00:09:55.801 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:55.801 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.801 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:55.801 15:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66629 00:09:55.801 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66629 ']' 00:09:55.801 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66629 00:09:55.802 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:55.802 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.802 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66629 00:09:55.802 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.802 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.802 killing process with pid 66629 00:09:55.802 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66629' 00:09:55.802 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66629 00:09:55.802 [2024-11-25 15:36:54.277631] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:55.802 [2024-11-25 15:36:54.277744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.802 [2024-11-25 15:36:54.277807] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.802 15:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66629 00:09:55.802 [2024-11-25 15:36:54.277819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:56.061 [2024-11-25 15:36:54.563929] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.002 15:36:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:57.002 00:09:57.002 real 0m5.043s 00:09:57.002 user 0m7.237s 00:09:57.002 sys 0m0.858s 00:09:57.002 15:36:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.002 15:36:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.002 ************************************ 00:09:57.002 END TEST raid_superblock_test 00:09:57.002 ************************************ 00:09:57.002 15:36:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:57.002 15:36:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:57.002 15:36:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.002 15:36:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.261 ************************************ 00:09:57.261 START TEST raid_read_error_test 00:09:57.261 ************************************ 00:09:57.261 15:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:57.262 15:36:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qKBeHkWQuO 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66882 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66882 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66882 ']' 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.262 15:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.262 [2024-11-25 15:36:55.786732] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:09:57.262 [2024-11-25 15:36:55.786881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66882 ] 00:09:57.522 [2024-11-25 15:36:55.958836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.522 [2024-11-25 15:36:56.070115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.781 [2024-11-25 15:36:56.256130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.781 [2024-11-25 15:36:56.256165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.041 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.041 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:58.041 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.041 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:58.041 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.041 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.041 BaseBdev1_malloc 00:09:58.041 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.041 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:58.041 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.041 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.041 true 00:09:58.041 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:58.041 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:58.041 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.041 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.041 [2024-11-25 15:36:56.661694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:58.041 [2024-11-25 15:36:56.661753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.041 [2024-11-25 15:36:56.661772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:58.041 [2024-11-25 15:36:56.661783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.042 [2024-11-25 15:36:56.663868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.042 [2024-11-25 15:36:56.663909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:58.042 BaseBdev1 00:09:58.042 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.042 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.042 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:58.042 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.042 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.042 BaseBdev2_malloc 00:09:58.042 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.042 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:58.042 15:36:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.042 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.042 true 00:09:58.042 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.042 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:58.042 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.042 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.302 [2024-11-25 15:36:56.727068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:58.302 [2024-11-25 15:36:56.727138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.302 [2024-11-25 15:36:56.727155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:58.302 [2024-11-25 15:36:56.727165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.302 [2024-11-25 15:36:56.729136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.302 [2024-11-25 15:36:56.729175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:58.302 BaseBdev2 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.302 BaseBdev3_malloc 00:09:58.302 15:36:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.302 true 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.302 [2024-11-25 15:36:56.802649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:58.302 [2024-11-25 15:36:56.802770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.302 [2024-11-25 15:36:56.802806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:58.302 [2024-11-25 15:36:56.802838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.302 [2024-11-25 15:36:56.804908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.302 [2024-11-25 15:36:56.804996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:58.302 BaseBdev3 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.302 [2024-11-25 15:36:56.814703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.302 [2024-11-25 15:36:56.816436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.302 [2024-11-25 15:36:56.816515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.302 [2024-11-25 15:36:56.816700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:58.302 [2024-11-25 15:36:56.816712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:58.302 [2024-11-25 15:36:56.816950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:58.302 [2024-11-25 15:36:56.817104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:58.302 [2024-11-25 15:36:56.817117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:58.302 [2024-11-25 15:36:56.817257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.302 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.303 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.303 15:36:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.303 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.303 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.303 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.303 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.303 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.303 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.303 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.303 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.303 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.303 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.303 "name": "raid_bdev1", 00:09:58.303 "uuid": "6dcc56ae-9cfa-4ad9-ac2a-8a3cdf64cf79", 00:09:58.303 "strip_size_kb": 64, 00:09:58.303 "state": "online", 00:09:58.303 "raid_level": "concat", 00:09:58.303 "superblock": true, 00:09:58.303 "num_base_bdevs": 3, 00:09:58.303 "num_base_bdevs_discovered": 3, 00:09:58.303 "num_base_bdevs_operational": 3, 00:09:58.303 "base_bdevs_list": [ 00:09:58.303 { 00:09:58.303 "name": "BaseBdev1", 00:09:58.303 "uuid": "a004a90e-c575-5fe8-9b99-26dae170ede8", 00:09:58.303 "is_configured": true, 00:09:58.303 "data_offset": 2048, 00:09:58.303 "data_size": 63488 00:09:58.303 }, 00:09:58.303 { 00:09:58.303 "name": "BaseBdev2", 00:09:58.303 "uuid": "d200c793-1945-527c-bb67-16d9fb3c644e", 00:09:58.303 "is_configured": true, 00:09:58.303 "data_offset": 2048, 00:09:58.303 "data_size": 63488 
00:09:58.303 }, 00:09:58.303 { 00:09:58.303 "name": "BaseBdev3", 00:09:58.303 "uuid": "bda9a2f4-2984-5456-a012-d13e910445e8", 00:09:58.303 "is_configured": true, 00:09:58.303 "data_offset": 2048, 00:09:58.303 "data_size": 63488 00:09:58.303 } 00:09:58.303 ] 00:09:58.303 }' 00:09:58.303 15:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.303 15:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.875 15:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:58.875 15:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:58.875 [2024-11-25 15:36:57.378899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.817 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.817 "name": "raid_bdev1", 00:09:59.817 "uuid": "6dcc56ae-9cfa-4ad9-ac2a-8a3cdf64cf79", 00:09:59.817 "strip_size_kb": 64, 00:09:59.817 "state": "online", 00:09:59.817 "raid_level": "concat", 00:09:59.817 "superblock": true, 00:09:59.817 "num_base_bdevs": 3, 00:09:59.817 "num_base_bdevs_discovered": 3, 00:09:59.817 "num_base_bdevs_operational": 3, 00:09:59.817 "base_bdevs_list": [ 00:09:59.817 { 00:09:59.817 "name": "BaseBdev1", 00:09:59.817 "uuid": "a004a90e-c575-5fe8-9b99-26dae170ede8", 00:09:59.817 "is_configured": true, 00:09:59.817 "data_offset": 2048, 00:09:59.817 "data_size": 63488 
00:09:59.817 }, 00:09:59.817 { 00:09:59.817 "name": "BaseBdev2", 00:09:59.817 "uuid": "d200c793-1945-527c-bb67-16d9fb3c644e", 00:09:59.817 "is_configured": true, 00:09:59.817 "data_offset": 2048, 00:09:59.817 "data_size": 63488 00:09:59.817 }, 00:09:59.817 { 00:09:59.817 "name": "BaseBdev3", 00:09:59.817 "uuid": "bda9a2f4-2984-5456-a012-d13e910445e8", 00:09:59.817 "is_configured": true, 00:09:59.817 "data_offset": 2048, 00:09:59.817 "data_size": 63488 00:09:59.818 } 00:09:59.818 ] 00:09:59.818 }' 00:09:59.818 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.818 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.077 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:00.077 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.077 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.077 [2024-11-25 15:36:58.732734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.077 [2024-11-25 15:36:58.732770] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.077 [2024-11-25 15:36:58.735290] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.077 [2024-11-25 15:36:58.735335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.077 [2024-11-25 15:36:58.735371] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.077 [2024-11-25 15:36:58.735382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:00.077 { 00:10:00.077 "results": [ 00:10:00.077 { 00:10:00.077 "job": "raid_bdev1", 00:10:00.077 "core_mask": "0x1", 00:10:00.077 "workload": "randrw", 00:10:00.077 "percentage": 50, 
00:10:00.077 "status": "finished", 00:10:00.077 "queue_depth": 1, 00:10:00.077 "io_size": 131072, 00:10:00.077 "runtime": 1.354647, 00:10:00.077 "iops": 16622.042495203546, 00:10:00.077 "mibps": 2077.7553119004433, 00:10:00.077 "io_failed": 1, 00:10:00.077 "io_timeout": 0, 00:10:00.077 "avg_latency_us": 83.66121852639189, 00:10:00.077 "min_latency_us": 24.258515283842794, 00:10:00.077 "max_latency_us": 1373.6803493449781 00:10:00.077 } 00:10:00.077 ], 00:10:00.077 "core_count": 1 00:10:00.077 } 00:10:00.077 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.077 15:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66882 00:10:00.077 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66882 ']' 00:10:00.077 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66882 00:10:00.077 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:00.077 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.077 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66882 00:10:00.338 killing process with pid 66882 00:10:00.338 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.338 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.338 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66882' 00:10:00.338 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66882 00:10:00.338 [2024-11-25 15:36:58.781301] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:00.338 15:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66882 00:10:00.338 [2024-11-25 
15:36:59.005531] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.720 15:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qKBeHkWQuO 00:10:01.720 15:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:01.720 15:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:01.720 15:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:01.720 15:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:01.720 15:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.720 15:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:01.720 15:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:01.720 00:10:01.720 real 0m4.444s 00:10:01.720 user 0m5.329s 00:10:01.720 sys 0m0.525s 00:10:01.720 15:37:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.720 15:37:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.720 ************************************ 00:10:01.720 END TEST raid_read_error_test 00:10:01.720 ************************************ 00:10:01.720 15:37:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:01.720 15:37:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:01.720 15:37:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.720 15:37:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.720 ************************************ 00:10:01.720 START TEST raid_write_error_test 00:10:01.720 ************************************ 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:10:01.720 15:37:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:01.720 15:37:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zEi2yTZxNq 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67030 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67030 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67030 ']' 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.720 15:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.720 [2024-11-25 15:37:00.301097] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:10:01.720 [2024-11-25 15:37:00.301230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67030 ] 00:10:01.980 [2024-11-25 15:37:00.465046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.980 [2024-11-25 15:37:00.575468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.241 [2024-11-25 15:37:00.769174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.241 [2024-11-25 15:37:00.769244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.500 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.500 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:02.500 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:02.500 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:02.500 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.500 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.500 BaseBdev1_malloc 00:10:02.500 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.500 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:02.500 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.500 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.500 true 00:10:02.501 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.501 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:02.501 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.501 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.761 [2024-11-25 15:37:01.185280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:02.761 [2024-11-25 15:37:01.185335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.761 [2024-11-25 15:37:01.185356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:02.761 [2024-11-25 15:37:01.185367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.761 [2024-11-25 15:37:01.187471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.761 [2024-11-25 15:37:01.187509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:02.761 BaseBdev1 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:02.762 BaseBdev2_malloc 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.762 true 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.762 [2024-11-25 15:37:01.251097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:02.762 [2024-11-25 15:37:01.251156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.762 [2024-11-25 15:37:01.251189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:02.762 [2024-11-25 15:37:01.251199] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.762 [2024-11-25 15:37:01.253162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.762 [2024-11-25 15:37:01.253209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:02.762 BaseBdev2 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:02.762 15:37:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.762 BaseBdev3_malloc 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.762 true 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.762 [2024-11-25 15:37:01.328881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:02.762 [2024-11-25 15:37:01.328937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.762 [2024-11-25 15:37:01.328970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:02.762 [2024-11-25 15:37:01.328980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.762 [2024-11-25 15:37:01.330970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.762 [2024-11-25 15:37:01.331021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:02.762 BaseBdev3 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.762 [2024-11-25 15:37:01.340911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.762 [2024-11-25 15:37:01.342656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.762 [2024-11-25 15:37:01.342738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.762 [2024-11-25 15:37:01.342927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:02.762 [2024-11-25 15:37:01.342946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:02.762 [2024-11-25 15:37:01.343190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:02.762 [2024-11-25 15:37:01.343350] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:02.762 [2024-11-25 15:37:01.343370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:02.762 [2024-11-25 15:37:01.343509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.762 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.762 "name": "raid_bdev1", 00:10:02.762 "uuid": "78b84a5d-3571-4e8b-9008-b7a96b278b7e", 00:10:02.762 "strip_size_kb": 64, 00:10:02.762 "state": "online", 00:10:02.762 "raid_level": "concat", 00:10:02.762 "superblock": true, 00:10:02.762 "num_base_bdevs": 3, 00:10:02.762 "num_base_bdevs_discovered": 3, 00:10:02.762 "num_base_bdevs_operational": 3, 00:10:02.762 "base_bdevs_list": [ 00:10:02.762 { 00:10:02.762 
"name": "BaseBdev1", 00:10:02.762 "uuid": "aab4a80b-021c-52a8-a328-0c0ca979999d", 00:10:02.762 "is_configured": true, 00:10:02.762 "data_offset": 2048, 00:10:02.762 "data_size": 63488 00:10:02.762 }, 00:10:02.762 { 00:10:02.762 "name": "BaseBdev2", 00:10:02.762 "uuid": "826d9de8-0ea5-57b7-8d7c-43a8dfb3c750", 00:10:02.762 "is_configured": true, 00:10:02.762 "data_offset": 2048, 00:10:02.762 "data_size": 63488 00:10:02.762 }, 00:10:02.762 { 00:10:02.762 "name": "BaseBdev3", 00:10:02.763 "uuid": "0747bf09-2774-55d2-888d-5dd55e0c7611", 00:10:02.763 "is_configured": true, 00:10:02.763 "data_offset": 2048, 00:10:02.763 "data_size": 63488 00:10:02.763 } 00:10:02.763 ] 00:10:02.763 }' 00:10:02.763 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.763 15:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.022 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:03.022 15:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:03.283 [2024-11-25 15:37:01.785432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.222 "name": "raid_bdev1", 00:10:04.222 "uuid": "78b84a5d-3571-4e8b-9008-b7a96b278b7e", 00:10:04.222 "strip_size_kb": 64, 00:10:04.222 "state": "online", 
00:10:04.222 "raid_level": "concat", 00:10:04.222 "superblock": true, 00:10:04.222 "num_base_bdevs": 3, 00:10:04.222 "num_base_bdevs_discovered": 3, 00:10:04.222 "num_base_bdevs_operational": 3, 00:10:04.222 "base_bdevs_list": [ 00:10:04.222 { 00:10:04.222 "name": "BaseBdev1", 00:10:04.222 "uuid": "aab4a80b-021c-52a8-a328-0c0ca979999d", 00:10:04.222 "is_configured": true, 00:10:04.222 "data_offset": 2048, 00:10:04.222 "data_size": 63488 00:10:04.222 }, 00:10:04.222 { 00:10:04.222 "name": "BaseBdev2", 00:10:04.222 "uuid": "826d9de8-0ea5-57b7-8d7c-43a8dfb3c750", 00:10:04.222 "is_configured": true, 00:10:04.222 "data_offset": 2048, 00:10:04.222 "data_size": 63488 00:10:04.222 }, 00:10:04.222 { 00:10:04.222 "name": "BaseBdev3", 00:10:04.222 "uuid": "0747bf09-2774-55d2-888d-5dd55e0c7611", 00:10:04.222 "is_configured": true, 00:10:04.222 "data_offset": 2048, 00:10:04.222 "data_size": 63488 00:10:04.222 } 00:10:04.222 ] 00:10:04.222 }' 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.222 15:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.503 15:37:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:04.503 15:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.503 15:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.503 [2024-11-25 15:37:03.165332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:04.503 [2024-11-25 15:37:03.165367] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.503 [2024-11-25 15:37:03.167982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.503 [2024-11-25 15:37:03.168040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.503 [2024-11-25 15:37:03.168078] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.503 [2024-11-25 15:37:03.168090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:04.503 { 00:10:04.503 "results": [ 00:10:04.503 { 00:10:04.503 "job": "raid_bdev1", 00:10:04.503 "core_mask": "0x1", 00:10:04.503 "workload": "randrw", 00:10:04.503 "percentage": 50, 00:10:04.503 "status": "finished", 00:10:04.503 "queue_depth": 1, 00:10:04.503 "io_size": 131072, 00:10:04.503 "runtime": 1.380759, 00:10:04.503 "iops": 16330.148852913506, 00:10:04.503 "mibps": 2041.2686066141882, 00:10:04.503 "io_failed": 1, 00:10:04.503 "io_timeout": 0, 00:10:04.503 "avg_latency_us": 85.18682027940703, 00:10:04.503 "min_latency_us": 25.2646288209607, 00:10:04.503 "max_latency_us": 1387.989519650655 00:10:04.503 } 00:10:04.503 ], 00:10:04.503 "core_count": 1 00:10:04.503 } 00:10:04.503 15:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.503 15:37:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67030 00:10:04.503 15:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67030 ']' 00:10:04.503 15:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67030 00:10:04.503 15:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:04.774 15:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.774 15:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67030 00:10:04.774 15:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.774 15:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.774 killing process with pid 67030 00:10:04.774 15:37:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67030' 00:10:04.774 15:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67030 00:10:04.774 [2024-11-25 15:37:03.204348] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:04.774 15:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67030 00:10:04.774 [2024-11-25 15:37:03.425215] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:06.154 15:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zEi2yTZxNq 00:10:06.154 15:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:06.154 15:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:06.154 15:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:06.154 15:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:06.154 15:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:06.154 15:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:06.154 15:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:06.154 00:10:06.154 real 0m4.360s 00:10:06.154 user 0m5.154s 00:10:06.154 sys 0m0.525s 00:10:06.154 15:37:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.154 15:37:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.154 ************************************ 00:10:06.154 END TEST raid_write_error_test 00:10:06.154 ************************************ 00:10:06.154 15:37:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:06.154 15:37:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:06.154 15:37:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:06.154 15:37:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.154 15:37:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:06.154 ************************************ 00:10:06.154 START TEST raid_state_function_test 00:10:06.154 ************************************ 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67168 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67168' 00:10:06.154 Process raid pid: 67168 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67168 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67168 ']' 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.154 15:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.154 [2024-11-25 15:37:04.722997] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:10:06.155 [2024-11-25 15:37:04.723132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.414 [2024-11-25 15:37:04.894423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.414 [2024-11-25 15:37:05.010547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.672 [2024-11-25 15:37:05.205819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.672 [2024-11-25 15:37:05.205869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.932 [2024-11-25 15:37:05.546697] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.932 [2024-11-25 15:37:05.546752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.932 [2024-11-25 15:37:05.546763] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.932 [2024-11-25 15:37:05.546772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.932 [2024-11-25 15:37:05.546779] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.932 [2024-11-25 15:37:05.546787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.932 
15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.932 "name": "Existed_Raid", 00:10:06.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.932 "strip_size_kb": 0, 00:10:06.932 "state": "configuring", 00:10:06.932 "raid_level": "raid1", 00:10:06.932 "superblock": false, 00:10:06.932 "num_base_bdevs": 3, 00:10:06.932 "num_base_bdevs_discovered": 0, 00:10:06.932 "num_base_bdevs_operational": 3, 00:10:06.932 "base_bdevs_list": [ 00:10:06.932 { 00:10:06.932 "name": "BaseBdev1", 00:10:06.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.932 "is_configured": false, 00:10:06.932 "data_offset": 0, 00:10:06.932 "data_size": 0 00:10:06.932 }, 00:10:06.932 { 00:10:06.932 "name": "BaseBdev2", 00:10:06.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.932 "is_configured": false, 00:10:06.932 "data_offset": 0, 00:10:06.932 "data_size": 0 00:10:06.932 }, 00:10:06.932 { 00:10:06.932 "name": "BaseBdev3", 00:10:06.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.932 "is_configured": false, 00:10:06.932 "data_offset": 0, 00:10:06.932 "data_size": 0 00:10:06.932 } 00:10:06.932 ] 00:10:06.932 }' 00:10:06.932 15:37:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.932 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.501 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.501 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.501 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.501 [2024-11-25 15:37:05.977910] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.501 [2024-11-25 15:37:05.977991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:07.501 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.501 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:07.501 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.501 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.501 [2024-11-25 15:37:05.989874] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:07.501 [2024-11-25 15:37:05.989955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:07.501 [2024-11-25 15:37:05.989983] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:07.501 [2024-11-25 15:37:05.990017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.501 [2024-11-25 15:37:05.990052] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:07.501 [2024-11-25 15:37:05.990073] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:07.501 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.501 15:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:07.501 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.501 15:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.501 [2024-11-25 15:37:06.035840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.501 BaseBdev1 00:10:07.501 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.501 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:07.501 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:07.501 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.501 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:07.501 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.501 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.501 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.502 [ 00:10:07.502 { 00:10:07.502 "name": "BaseBdev1", 00:10:07.502 "aliases": [ 00:10:07.502 "53aa7d3f-a922-42ae-ac15-7cfdd7056b55" 00:10:07.502 ], 00:10:07.502 "product_name": "Malloc disk", 00:10:07.502 "block_size": 512, 00:10:07.502 "num_blocks": 65536, 00:10:07.502 "uuid": "53aa7d3f-a922-42ae-ac15-7cfdd7056b55", 00:10:07.502 "assigned_rate_limits": { 00:10:07.502 "rw_ios_per_sec": 0, 00:10:07.502 "rw_mbytes_per_sec": 0, 00:10:07.502 "r_mbytes_per_sec": 0, 00:10:07.502 "w_mbytes_per_sec": 0 00:10:07.502 }, 00:10:07.502 "claimed": true, 00:10:07.502 "claim_type": "exclusive_write", 00:10:07.502 "zoned": false, 00:10:07.502 "supported_io_types": { 00:10:07.502 "read": true, 00:10:07.502 "write": true, 00:10:07.502 "unmap": true, 00:10:07.502 "flush": true, 00:10:07.502 "reset": true, 00:10:07.502 "nvme_admin": false, 00:10:07.502 "nvme_io": false, 00:10:07.502 "nvme_io_md": false, 00:10:07.502 "write_zeroes": true, 00:10:07.502 "zcopy": true, 00:10:07.502 "get_zone_info": false, 00:10:07.502 "zone_management": false, 00:10:07.502 "zone_append": false, 00:10:07.502 "compare": false, 00:10:07.502 "compare_and_write": false, 00:10:07.502 "abort": true, 00:10:07.502 "seek_hole": false, 00:10:07.502 "seek_data": false, 00:10:07.502 "copy": true, 00:10:07.502 "nvme_iov_md": false 00:10:07.502 }, 00:10:07.502 "memory_domains": [ 00:10:07.502 { 00:10:07.502 "dma_device_id": "system", 00:10:07.502 "dma_device_type": 1 00:10:07.502 }, 00:10:07.502 { 00:10:07.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.502 "dma_device_type": 2 00:10:07.502 } 00:10:07.502 ], 00:10:07.502 "driver_specific": {} 00:10:07.502 } 00:10:07.502 ] 00:10:07.502 15:37:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:07.502 "name": "Existed_Raid", 00:10:07.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.502 "strip_size_kb": 0, 00:10:07.502 "state": "configuring", 00:10:07.502 "raid_level": "raid1", 00:10:07.502 "superblock": false, 00:10:07.502 "num_base_bdevs": 3, 00:10:07.502 "num_base_bdevs_discovered": 1, 00:10:07.502 "num_base_bdevs_operational": 3, 00:10:07.502 "base_bdevs_list": [ 00:10:07.502 { 00:10:07.502 "name": "BaseBdev1", 00:10:07.502 "uuid": "53aa7d3f-a922-42ae-ac15-7cfdd7056b55", 00:10:07.502 "is_configured": true, 00:10:07.502 "data_offset": 0, 00:10:07.502 "data_size": 65536 00:10:07.502 }, 00:10:07.502 { 00:10:07.502 "name": "BaseBdev2", 00:10:07.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.502 "is_configured": false, 00:10:07.502 "data_offset": 0, 00:10:07.502 "data_size": 0 00:10:07.502 }, 00:10:07.502 { 00:10:07.502 "name": "BaseBdev3", 00:10:07.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.502 "is_configured": false, 00:10:07.502 "data_offset": 0, 00:10:07.502 "data_size": 0 00:10:07.502 } 00:10:07.502 ] 00:10:07.502 }' 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.502 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.070 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.070 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.070 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.070 [2024-11-25 15:37:06.503087] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.071 [2024-11-25 15:37:06.503184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.071 [2024-11-25 15:37:06.515106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.071 [2024-11-25 15:37:06.516947] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.071 [2024-11-25 15:37:06.517035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.071 [2024-11-25 15:37:06.517069] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:08.071 [2024-11-25 15:37:06.517094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.071 "name": "Existed_Raid", 00:10:08.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.071 "strip_size_kb": 0, 00:10:08.071 "state": "configuring", 00:10:08.071 "raid_level": "raid1", 00:10:08.071 "superblock": false, 00:10:08.071 "num_base_bdevs": 3, 00:10:08.071 "num_base_bdevs_discovered": 1, 00:10:08.071 "num_base_bdevs_operational": 3, 00:10:08.071 "base_bdevs_list": [ 00:10:08.071 { 00:10:08.071 "name": "BaseBdev1", 00:10:08.071 "uuid": "53aa7d3f-a922-42ae-ac15-7cfdd7056b55", 00:10:08.071 "is_configured": true, 00:10:08.071 "data_offset": 0, 00:10:08.071 "data_size": 65536 00:10:08.071 }, 00:10:08.071 { 00:10:08.071 "name": "BaseBdev2", 00:10:08.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.071 
"is_configured": false, 00:10:08.071 "data_offset": 0, 00:10:08.071 "data_size": 0 00:10:08.071 }, 00:10:08.071 { 00:10:08.071 "name": "BaseBdev3", 00:10:08.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.071 "is_configured": false, 00:10:08.071 "data_offset": 0, 00:10:08.071 "data_size": 0 00:10:08.071 } 00:10:08.071 ] 00:10:08.071 }' 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.071 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.331 [2024-11-25 15:37:06.965553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.331 BaseBdev2 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.331 15:37:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.331 15:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.331 [ 00:10:08.331 { 00:10:08.331 "name": "BaseBdev2", 00:10:08.331 "aliases": [ 00:10:08.331 "8ffa94e8-4060-4302-b034-d700e8bf8abe" 00:10:08.331 ], 00:10:08.331 "product_name": "Malloc disk", 00:10:08.331 "block_size": 512, 00:10:08.331 "num_blocks": 65536, 00:10:08.331 "uuid": "8ffa94e8-4060-4302-b034-d700e8bf8abe", 00:10:08.331 "assigned_rate_limits": { 00:10:08.331 "rw_ios_per_sec": 0, 00:10:08.331 "rw_mbytes_per_sec": 0, 00:10:08.331 "r_mbytes_per_sec": 0, 00:10:08.331 "w_mbytes_per_sec": 0 00:10:08.331 }, 00:10:08.331 "claimed": true, 00:10:08.331 "claim_type": "exclusive_write", 00:10:08.331 "zoned": false, 00:10:08.331 "supported_io_types": { 00:10:08.331 "read": true, 00:10:08.331 "write": true, 00:10:08.331 "unmap": true, 00:10:08.331 "flush": true, 00:10:08.331 "reset": true, 00:10:08.331 "nvme_admin": false, 00:10:08.331 "nvme_io": false, 00:10:08.331 "nvme_io_md": false, 00:10:08.331 "write_zeroes": true, 00:10:08.331 "zcopy": true, 00:10:08.331 "get_zone_info": false, 00:10:08.331 "zone_management": false, 00:10:08.331 "zone_append": false, 00:10:08.331 "compare": false, 00:10:08.331 "compare_and_write": false, 00:10:08.331 "abort": true, 00:10:08.331 "seek_hole": false, 00:10:08.331 "seek_data": false, 00:10:08.331 "copy": true, 00:10:08.331 "nvme_iov_md": false 00:10:08.331 }, 00:10:08.331 
"memory_domains": [ 00:10:08.331 { 00:10:08.331 "dma_device_id": "system", 00:10:08.331 "dma_device_type": 1 00:10:08.331 }, 00:10:08.331 { 00:10:08.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.331 "dma_device_type": 2 00:10:08.331 } 00:10:08.331 ], 00:10:08.331 "driver_specific": {} 00:10:08.331 } 00:10:08.331 ] 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.331 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.590 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.590 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.590 "name": "Existed_Raid", 00:10:08.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.590 "strip_size_kb": 0, 00:10:08.590 "state": "configuring", 00:10:08.590 "raid_level": "raid1", 00:10:08.590 "superblock": false, 00:10:08.590 "num_base_bdevs": 3, 00:10:08.590 "num_base_bdevs_discovered": 2, 00:10:08.590 "num_base_bdevs_operational": 3, 00:10:08.590 "base_bdevs_list": [ 00:10:08.590 { 00:10:08.590 "name": "BaseBdev1", 00:10:08.590 "uuid": "53aa7d3f-a922-42ae-ac15-7cfdd7056b55", 00:10:08.590 "is_configured": true, 00:10:08.590 "data_offset": 0, 00:10:08.590 "data_size": 65536 00:10:08.590 }, 00:10:08.590 { 00:10:08.590 "name": "BaseBdev2", 00:10:08.590 "uuid": "8ffa94e8-4060-4302-b034-d700e8bf8abe", 00:10:08.590 "is_configured": true, 00:10:08.590 "data_offset": 0, 00:10:08.590 "data_size": 65536 00:10:08.590 }, 00:10:08.590 { 00:10:08.591 "name": "BaseBdev3", 00:10:08.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.591 "is_configured": false, 00:10:08.591 "data_offset": 0, 00:10:08.591 "data_size": 0 00:10:08.591 } 00:10:08.591 ] 00:10:08.591 }' 00:10:08.591 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.591 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.850 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:08.850 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.850 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.850 [2024-11-25 15:37:07.516607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.850 [2024-11-25 15:37:07.516742] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:08.850 [2024-11-25 15:37:07.516773] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:08.850 [2024-11-25 15:37:07.517120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:08.850 [2024-11-25 15:37:07.517320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:08.850 [2024-11-25 15:37:07.517361] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:08.850 [2024-11-25 15:37:07.517655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.850 BaseBdev3 00:10:08.850 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.850 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:08.850 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:08.850 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.850 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:08.850 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.850 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.850 15:37:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.850 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.850 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.110 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.110 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:09.110 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.110 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.110 [ 00:10:09.110 { 00:10:09.110 "name": "BaseBdev3", 00:10:09.110 "aliases": [ 00:10:09.110 "d1fd780e-27ce-42a7-825a-57661fb89949" 00:10:09.110 ], 00:10:09.110 "product_name": "Malloc disk", 00:10:09.110 "block_size": 512, 00:10:09.110 "num_blocks": 65536, 00:10:09.110 "uuid": "d1fd780e-27ce-42a7-825a-57661fb89949", 00:10:09.110 "assigned_rate_limits": { 00:10:09.110 "rw_ios_per_sec": 0, 00:10:09.110 "rw_mbytes_per_sec": 0, 00:10:09.110 "r_mbytes_per_sec": 0, 00:10:09.110 "w_mbytes_per_sec": 0 00:10:09.110 }, 00:10:09.110 "claimed": true, 00:10:09.110 "claim_type": "exclusive_write", 00:10:09.110 "zoned": false, 00:10:09.110 "supported_io_types": { 00:10:09.110 "read": true, 00:10:09.110 "write": true, 00:10:09.110 "unmap": true, 00:10:09.110 "flush": true, 00:10:09.110 "reset": true, 00:10:09.110 "nvme_admin": false, 00:10:09.110 "nvme_io": false, 00:10:09.110 "nvme_io_md": false, 00:10:09.110 "write_zeroes": true, 00:10:09.110 "zcopy": true, 00:10:09.110 "get_zone_info": false, 00:10:09.110 "zone_management": false, 00:10:09.110 "zone_append": false, 00:10:09.110 "compare": false, 00:10:09.110 "compare_and_write": false, 00:10:09.110 "abort": true, 00:10:09.110 "seek_hole": false, 00:10:09.110 "seek_data": false, 00:10:09.110 
"copy": true, 00:10:09.110 "nvme_iov_md": false 00:10:09.110 }, 00:10:09.110 "memory_domains": [ 00:10:09.110 { 00:10:09.110 "dma_device_id": "system", 00:10:09.110 "dma_device_type": 1 00:10:09.110 }, 00:10:09.110 { 00:10:09.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.110 "dma_device_type": 2 00:10:09.111 } 00:10:09.111 ], 00:10:09.111 "driver_specific": {} 00:10:09.111 } 00:10:09.111 ] 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.111 15:37:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.111 "name": "Existed_Raid", 00:10:09.111 "uuid": "1d8e3c62-20ae-42ea-8983-8a32ae229023", 00:10:09.111 "strip_size_kb": 0, 00:10:09.111 "state": "online", 00:10:09.111 "raid_level": "raid1", 00:10:09.111 "superblock": false, 00:10:09.111 "num_base_bdevs": 3, 00:10:09.111 "num_base_bdevs_discovered": 3, 00:10:09.111 "num_base_bdevs_operational": 3, 00:10:09.111 "base_bdevs_list": [ 00:10:09.111 { 00:10:09.111 "name": "BaseBdev1", 00:10:09.111 "uuid": "53aa7d3f-a922-42ae-ac15-7cfdd7056b55", 00:10:09.111 "is_configured": true, 00:10:09.111 "data_offset": 0, 00:10:09.111 "data_size": 65536 00:10:09.111 }, 00:10:09.111 { 00:10:09.111 "name": "BaseBdev2", 00:10:09.111 "uuid": "8ffa94e8-4060-4302-b034-d700e8bf8abe", 00:10:09.111 "is_configured": true, 00:10:09.111 "data_offset": 0, 00:10:09.111 "data_size": 65536 00:10:09.111 }, 00:10:09.111 { 00:10:09.111 "name": "BaseBdev3", 00:10:09.111 "uuid": "d1fd780e-27ce-42a7-825a-57661fb89949", 00:10:09.111 "is_configured": true, 00:10:09.111 "data_offset": 0, 00:10:09.111 "data_size": 65536 00:10:09.111 } 00:10:09.111 ] 00:10:09.111 }' 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.111 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.371 15:37:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:09.371 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:09.371 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:09.371 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:09.371 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:09.371 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:09.371 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:09.371 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:09.371 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.371 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.371 [2024-11-25 15:37:07.948271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.371 15:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.371 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:09.371 "name": "Existed_Raid", 00:10:09.371 "aliases": [ 00:10:09.371 "1d8e3c62-20ae-42ea-8983-8a32ae229023" 00:10:09.371 ], 00:10:09.371 "product_name": "Raid Volume", 00:10:09.371 "block_size": 512, 00:10:09.371 "num_blocks": 65536, 00:10:09.371 "uuid": "1d8e3c62-20ae-42ea-8983-8a32ae229023", 00:10:09.371 "assigned_rate_limits": { 00:10:09.371 "rw_ios_per_sec": 0, 00:10:09.371 "rw_mbytes_per_sec": 0, 00:10:09.371 "r_mbytes_per_sec": 0, 00:10:09.371 "w_mbytes_per_sec": 0 00:10:09.371 }, 00:10:09.371 "claimed": false, 00:10:09.371 "zoned": false, 
00:10:09.371 "supported_io_types": { 00:10:09.371 "read": true, 00:10:09.371 "write": true, 00:10:09.371 "unmap": false, 00:10:09.371 "flush": false, 00:10:09.371 "reset": true, 00:10:09.371 "nvme_admin": false, 00:10:09.371 "nvme_io": false, 00:10:09.371 "nvme_io_md": false, 00:10:09.371 "write_zeroes": true, 00:10:09.371 "zcopy": false, 00:10:09.371 "get_zone_info": false, 00:10:09.371 "zone_management": false, 00:10:09.371 "zone_append": false, 00:10:09.371 "compare": false, 00:10:09.371 "compare_and_write": false, 00:10:09.371 "abort": false, 00:10:09.371 "seek_hole": false, 00:10:09.371 "seek_data": false, 00:10:09.371 "copy": false, 00:10:09.371 "nvme_iov_md": false 00:10:09.371 }, 00:10:09.371 "memory_domains": [ 00:10:09.371 { 00:10:09.371 "dma_device_id": "system", 00:10:09.371 "dma_device_type": 1 00:10:09.371 }, 00:10:09.371 { 00:10:09.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.371 "dma_device_type": 2 00:10:09.371 }, 00:10:09.371 { 00:10:09.371 "dma_device_id": "system", 00:10:09.371 "dma_device_type": 1 00:10:09.371 }, 00:10:09.371 { 00:10:09.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.371 "dma_device_type": 2 00:10:09.371 }, 00:10:09.371 { 00:10:09.371 "dma_device_id": "system", 00:10:09.371 "dma_device_type": 1 00:10:09.371 }, 00:10:09.371 { 00:10:09.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.371 "dma_device_type": 2 00:10:09.371 } 00:10:09.371 ], 00:10:09.371 "driver_specific": { 00:10:09.371 "raid": { 00:10:09.371 "uuid": "1d8e3c62-20ae-42ea-8983-8a32ae229023", 00:10:09.371 "strip_size_kb": 0, 00:10:09.371 "state": "online", 00:10:09.371 "raid_level": "raid1", 00:10:09.371 "superblock": false, 00:10:09.371 "num_base_bdevs": 3, 00:10:09.371 "num_base_bdevs_discovered": 3, 00:10:09.371 "num_base_bdevs_operational": 3, 00:10:09.371 "base_bdevs_list": [ 00:10:09.371 { 00:10:09.371 "name": "BaseBdev1", 00:10:09.371 "uuid": "53aa7d3f-a922-42ae-ac15-7cfdd7056b55", 00:10:09.371 "is_configured": true, 00:10:09.371 
"data_offset": 0, 00:10:09.371 "data_size": 65536 00:10:09.371 }, 00:10:09.371 { 00:10:09.371 "name": "BaseBdev2", 00:10:09.371 "uuid": "8ffa94e8-4060-4302-b034-d700e8bf8abe", 00:10:09.371 "is_configured": true, 00:10:09.371 "data_offset": 0, 00:10:09.371 "data_size": 65536 00:10:09.371 }, 00:10:09.371 { 00:10:09.371 "name": "BaseBdev3", 00:10:09.371 "uuid": "d1fd780e-27ce-42a7-825a-57661fb89949", 00:10:09.371 "is_configured": true, 00:10:09.371 "data_offset": 0, 00:10:09.371 "data_size": 65536 00:10:09.371 } 00:10:09.371 ] 00:10:09.371 } 00:10:09.371 } 00:10:09.371 }' 00:10:09.371 15:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.371 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:09.371 BaseBdev2 00:10:09.371 BaseBdev3' 00:10:09.371 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.632 [2024-11-25 15:37:08.167607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.632 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.892 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.892 "name": "Existed_Raid", 00:10:09.892 "uuid": "1d8e3c62-20ae-42ea-8983-8a32ae229023", 00:10:09.892 "strip_size_kb": 0, 00:10:09.892 "state": "online", 00:10:09.892 "raid_level": "raid1", 00:10:09.892 "superblock": false, 00:10:09.892 "num_base_bdevs": 3, 00:10:09.892 "num_base_bdevs_discovered": 2, 00:10:09.892 "num_base_bdevs_operational": 2, 00:10:09.892 "base_bdevs_list": [ 00:10:09.892 { 00:10:09.892 "name": null, 00:10:09.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.892 "is_configured": false, 00:10:09.892 "data_offset": 0, 00:10:09.892 "data_size": 65536 00:10:09.892 }, 00:10:09.892 { 00:10:09.892 "name": "BaseBdev2", 00:10:09.892 "uuid": "8ffa94e8-4060-4302-b034-d700e8bf8abe", 00:10:09.892 "is_configured": true, 00:10:09.892 "data_offset": 0, 00:10:09.892 "data_size": 65536 00:10:09.892 }, 00:10:09.892 { 00:10:09.892 "name": "BaseBdev3", 00:10:09.892 "uuid": "d1fd780e-27ce-42a7-825a-57661fb89949", 00:10:09.892 "is_configured": true, 00:10:09.892 "data_offset": 0, 00:10:09.892 "data_size": 65536 00:10:09.892 } 00:10:09.892 ] 
00:10:09.892 }' 00:10:09.892 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.892 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.151 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:10.151 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:10.151 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.151 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.151 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.151 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:10.151 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.151 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:10.151 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:10.151 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:10.151 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.151 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.151 [2024-11-25 15:37:08.743093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:10.411 15:37:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.411 [2024-11-25 15:37:08.895991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:10.411 [2024-11-25 15:37:08.896141] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.411 [2024-11-25 15:37:08.990347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.411 [2024-11-25 15:37:08.990479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.411 [2024-11-25 15:37:08.990531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:10.411 15:37:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.411 15:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.411 BaseBdev2 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.411 
15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.411 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.670 [ 00:10:10.670 { 00:10:10.670 "name": "BaseBdev2", 00:10:10.670 "aliases": [ 00:10:10.670 "00c4a1f6-7ea9-4555-b320-d4b8c5b286cb" 00:10:10.670 ], 00:10:10.670 "product_name": "Malloc disk", 00:10:10.670 "block_size": 512, 00:10:10.670 "num_blocks": 65536, 00:10:10.670 "uuid": "00c4a1f6-7ea9-4555-b320-d4b8c5b286cb", 00:10:10.670 "assigned_rate_limits": { 00:10:10.670 "rw_ios_per_sec": 0, 00:10:10.670 "rw_mbytes_per_sec": 0, 00:10:10.670 "r_mbytes_per_sec": 0, 00:10:10.670 "w_mbytes_per_sec": 0 00:10:10.670 }, 00:10:10.670 "claimed": false, 00:10:10.670 "zoned": false, 00:10:10.670 "supported_io_types": { 00:10:10.670 "read": true, 00:10:10.670 "write": true, 00:10:10.670 "unmap": true, 00:10:10.670 "flush": true, 00:10:10.670 "reset": true, 00:10:10.670 "nvme_admin": false, 00:10:10.670 "nvme_io": false, 00:10:10.670 "nvme_io_md": false, 00:10:10.670 "write_zeroes": true, 
00:10:10.670 "zcopy": true, 00:10:10.670 "get_zone_info": false, 00:10:10.670 "zone_management": false, 00:10:10.670 "zone_append": false, 00:10:10.670 "compare": false, 00:10:10.670 "compare_and_write": false, 00:10:10.670 "abort": true, 00:10:10.670 "seek_hole": false, 00:10:10.670 "seek_data": false, 00:10:10.670 "copy": true, 00:10:10.670 "nvme_iov_md": false 00:10:10.670 }, 00:10:10.670 "memory_domains": [ 00:10:10.670 { 00:10:10.670 "dma_device_id": "system", 00:10:10.670 "dma_device_type": 1 00:10:10.670 }, 00:10:10.670 { 00:10:10.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.670 "dma_device_type": 2 00:10:10.670 } 00:10:10.670 ], 00:10:10.670 "driver_specific": {} 00:10:10.670 } 00:10:10.670 ] 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.670 BaseBdev3 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.670 15:37:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.670 [ 00:10:10.670 { 00:10:10.670 "name": "BaseBdev3", 00:10:10.670 "aliases": [ 00:10:10.670 "c5547057-9658-41e6-bcdc-866e2f1f636b" 00:10:10.670 ], 00:10:10.670 "product_name": "Malloc disk", 00:10:10.670 "block_size": 512, 00:10:10.670 "num_blocks": 65536, 00:10:10.670 "uuid": "c5547057-9658-41e6-bcdc-866e2f1f636b", 00:10:10.670 "assigned_rate_limits": { 00:10:10.670 "rw_ios_per_sec": 0, 00:10:10.670 "rw_mbytes_per_sec": 0, 00:10:10.670 "r_mbytes_per_sec": 0, 00:10:10.670 "w_mbytes_per_sec": 0 00:10:10.670 }, 00:10:10.670 "claimed": false, 00:10:10.670 "zoned": false, 00:10:10.670 "supported_io_types": { 00:10:10.670 "read": true, 00:10:10.670 "write": true, 00:10:10.670 "unmap": true, 00:10:10.670 "flush": true, 00:10:10.670 "reset": true, 00:10:10.670 "nvme_admin": false, 00:10:10.670 "nvme_io": false, 00:10:10.670 "nvme_io_md": false, 00:10:10.670 "write_zeroes": true, 
00:10:10.670 "zcopy": true, 00:10:10.670 "get_zone_info": false, 00:10:10.670 "zone_management": false, 00:10:10.670 "zone_append": false, 00:10:10.670 "compare": false, 00:10:10.670 "compare_and_write": false, 00:10:10.670 "abort": true, 00:10:10.670 "seek_hole": false, 00:10:10.670 "seek_data": false, 00:10:10.670 "copy": true, 00:10:10.670 "nvme_iov_md": false 00:10:10.670 }, 00:10:10.670 "memory_domains": [ 00:10:10.670 { 00:10:10.670 "dma_device_id": "system", 00:10:10.670 "dma_device_type": 1 00:10:10.670 }, 00:10:10.670 { 00:10:10.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.670 "dma_device_type": 2 00:10:10.670 } 00:10:10.670 ], 00:10:10.670 "driver_specific": {} 00:10:10.670 } 00:10:10.670 ] 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.670 [2024-11-25 15:37:09.205676] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.670 [2024-11-25 15:37:09.205766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.670 [2024-11-25 15:37:09.205811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.670 [2024-11-25 15:37:09.207568] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:10.670 "name": "Existed_Raid", 00:10:10.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.670 "strip_size_kb": 0, 00:10:10.670 "state": "configuring", 00:10:10.670 "raid_level": "raid1", 00:10:10.670 "superblock": false, 00:10:10.670 "num_base_bdevs": 3, 00:10:10.670 "num_base_bdevs_discovered": 2, 00:10:10.670 "num_base_bdevs_operational": 3, 00:10:10.670 "base_bdevs_list": [ 00:10:10.670 { 00:10:10.670 "name": "BaseBdev1", 00:10:10.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.670 "is_configured": false, 00:10:10.670 "data_offset": 0, 00:10:10.670 "data_size": 0 00:10:10.670 }, 00:10:10.670 { 00:10:10.670 "name": "BaseBdev2", 00:10:10.670 "uuid": "00c4a1f6-7ea9-4555-b320-d4b8c5b286cb", 00:10:10.670 "is_configured": true, 00:10:10.670 "data_offset": 0, 00:10:10.670 "data_size": 65536 00:10:10.670 }, 00:10:10.670 { 00:10:10.670 "name": "BaseBdev3", 00:10:10.670 "uuid": "c5547057-9658-41e6-bcdc-866e2f1f636b", 00:10:10.670 "is_configured": true, 00:10:10.670 "data_offset": 0, 00:10:10.670 "data_size": 65536 00:10:10.670 } 00:10:10.670 ] 00:10:10.670 }' 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.670 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.237 [2024-11-25 15:37:09.676896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.237 "name": "Existed_Raid", 00:10:11.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.237 "strip_size_kb": 0, 00:10:11.237 "state": "configuring", 00:10:11.237 "raid_level": "raid1", 00:10:11.237 "superblock": false, 00:10:11.237 "num_base_bdevs": 3, 
00:10:11.237 "num_base_bdevs_discovered": 1, 00:10:11.237 "num_base_bdevs_operational": 3, 00:10:11.237 "base_bdevs_list": [ 00:10:11.237 { 00:10:11.237 "name": "BaseBdev1", 00:10:11.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.237 "is_configured": false, 00:10:11.237 "data_offset": 0, 00:10:11.237 "data_size": 0 00:10:11.237 }, 00:10:11.237 { 00:10:11.237 "name": null, 00:10:11.237 "uuid": "00c4a1f6-7ea9-4555-b320-d4b8c5b286cb", 00:10:11.237 "is_configured": false, 00:10:11.237 "data_offset": 0, 00:10:11.237 "data_size": 65536 00:10:11.237 }, 00:10:11.237 { 00:10:11.237 "name": "BaseBdev3", 00:10:11.237 "uuid": "c5547057-9658-41e6-bcdc-866e2f1f636b", 00:10:11.237 "is_configured": true, 00:10:11.237 "data_offset": 0, 00:10:11.237 "data_size": 65536 00:10:11.237 } 00:10:11.237 ] 00:10:11.237 }' 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.237 15:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.496 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.496 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:11.496 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.496 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.496 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.496 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:11.496 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:11.496 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.496 15:37:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.756 [2024-11-25 15:37:10.213375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.756 BaseBdev1 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.756 [ 00:10:11.756 { 00:10:11.756 "name": "BaseBdev1", 00:10:11.756 "aliases": [ 00:10:11.756 "82822f82-9eb0-4f18-8913-583968efd92c" 00:10:11.756 ], 00:10:11.756 "product_name": "Malloc disk", 
00:10:11.756 "block_size": 512, 00:10:11.756 "num_blocks": 65536, 00:10:11.756 "uuid": "82822f82-9eb0-4f18-8913-583968efd92c", 00:10:11.756 "assigned_rate_limits": { 00:10:11.756 "rw_ios_per_sec": 0, 00:10:11.756 "rw_mbytes_per_sec": 0, 00:10:11.756 "r_mbytes_per_sec": 0, 00:10:11.756 "w_mbytes_per_sec": 0 00:10:11.756 }, 00:10:11.756 "claimed": true, 00:10:11.756 "claim_type": "exclusive_write", 00:10:11.756 "zoned": false, 00:10:11.756 "supported_io_types": { 00:10:11.756 "read": true, 00:10:11.756 "write": true, 00:10:11.756 "unmap": true, 00:10:11.756 "flush": true, 00:10:11.756 "reset": true, 00:10:11.756 "nvme_admin": false, 00:10:11.756 "nvme_io": false, 00:10:11.756 "nvme_io_md": false, 00:10:11.756 "write_zeroes": true, 00:10:11.756 "zcopy": true, 00:10:11.756 "get_zone_info": false, 00:10:11.756 "zone_management": false, 00:10:11.756 "zone_append": false, 00:10:11.756 "compare": false, 00:10:11.756 "compare_and_write": false, 00:10:11.756 "abort": true, 00:10:11.756 "seek_hole": false, 00:10:11.756 "seek_data": false, 00:10:11.756 "copy": true, 00:10:11.756 "nvme_iov_md": false 00:10:11.756 }, 00:10:11.756 "memory_domains": [ 00:10:11.756 { 00:10:11.756 "dma_device_id": "system", 00:10:11.756 "dma_device_type": 1 00:10:11.756 }, 00:10:11.756 { 00:10:11.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.756 "dma_device_type": 2 00:10:11.756 } 00:10:11.756 ], 00:10:11.756 "driver_specific": {} 00:10:11.756 } 00:10:11.756 ] 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.756 "name": "Existed_Raid", 00:10:11.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.756 "strip_size_kb": 0, 00:10:11.756 "state": "configuring", 00:10:11.756 "raid_level": "raid1", 00:10:11.756 "superblock": false, 00:10:11.756 "num_base_bdevs": 3, 00:10:11.756 "num_base_bdevs_discovered": 2, 00:10:11.756 "num_base_bdevs_operational": 3, 00:10:11.756 "base_bdevs_list": [ 00:10:11.756 { 00:10:11.756 "name": "BaseBdev1", 00:10:11.756 "uuid": 
"82822f82-9eb0-4f18-8913-583968efd92c", 00:10:11.756 "is_configured": true, 00:10:11.756 "data_offset": 0, 00:10:11.756 "data_size": 65536 00:10:11.756 }, 00:10:11.756 { 00:10:11.756 "name": null, 00:10:11.756 "uuid": "00c4a1f6-7ea9-4555-b320-d4b8c5b286cb", 00:10:11.756 "is_configured": false, 00:10:11.756 "data_offset": 0, 00:10:11.756 "data_size": 65536 00:10:11.756 }, 00:10:11.756 { 00:10:11.756 "name": "BaseBdev3", 00:10:11.756 "uuid": "c5547057-9658-41e6-bcdc-866e2f1f636b", 00:10:11.756 "is_configured": true, 00:10:11.756 "data_offset": 0, 00:10:11.756 "data_size": 65536 00:10:11.756 } 00:10:11.756 ] 00:10:11.756 }' 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.756 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.016 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.016 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.016 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.016 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.276 [2024-11-25 15:37:10.748479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:12.276 15:37:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.276 "name": "Existed_Raid", 00:10:12.276 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:12.276 "strip_size_kb": 0, 00:10:12.276 "state": "configuring", 00:10:12.276 "raid_level": "raid1", 00:10:12.276 "superblock": false, 00:10:12.276 "num_base_bdevs": 3, 00:10:12.276 "num_base_bdevs_discovered": 1, 00:10:12.276 "num_base_bdevs_operational": 3, 00:10:12.276 "base_bdevs_list": [ 00:10:12.276 { 00:10:12.276 "name": "BaseBdev1", 00:10:12.276 "uuid": "82822f82-9eb0-4f18-8913-583968efd92c", 00:10:12.276 "is_configured": true, 00:10:12.276 "data_offset": 0, 00:10:12.276 "data_size": 65536 00:10:12.276 }, 00:10:12.276 { 00:10:12.276 "name": null, 00:10:12.276 "uuid": "00c4a1f6-7ea9-4555-b320-d4b8c5b286cb", 00:10:12.276 "is_configured": false, 00:10:12.276 "data_offset": 0, 00:10:12.276 "data_size": 65536 00:10:12.276 }, 00:10:12.276 { 00:10:12.276 "name": null, 00:10:12.276 "uuid": "c5547057-9658-41e6-bcdc-866e2f1f636b", 00:10:12.276 "is_configured": false, 00:10:12.276 "data_offset": 0, 00:10:12.276 "data_size": 65536 00:10:12.276 } 00:10:12.276 ] 00:10:12.276 }' 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.276 15:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.537 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.537 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:12.537 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.537 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.537 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.537 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:12.537 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:12.537 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.537 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.537 [2024-11-25 15:37:11.215732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.797 "name": "Existed_Raid", 00:10:12.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.797 "strip_size_kb": 0, 00:10:12.797 "state": "configuring", 00:10:12.797 "raid_level": "raid1", 00:10:12.797 "superblock": false, 00:10:12.797 "num_base_bdevs": 3, 00:10:12.797 "num_base_bdevs_discovered": 2, 00:10:12.797 "num_base_bdevs_operational": 3, 00:10:12.797 "base_bdevs_list": [ 00:10:12.797 { 00:10:12.797 "name": "BaseBdev1", 00:10:12.797 "uuid": "82822f82-9eb0-4f18-8913-583968efd92c", 00:10:12.797 "is_configured": true, 00:10:12.797 "data_offset": 0, 00:10:12.797 "data_size": 65536 00:10:12.797 }, 00:10:12.797 { 00:10:12.797 "name": null, 00:10:12.797 "uuid": "00c4a1f6-7ea9-4555-b320-d4b8c5b286cb", 00:10:12.797 "is_configured": false, 00:10:12.797 "data_offset": 0, 00:10:12.797 "data_size": 65536 00:10:12.797 }, 00:10:12.797 { 00:10:12.797 "name": "BaseBdev3", 00:10:12.797 "uuid": "c5547057-9658-41e6-bcdc-866e2f1f636b", 00:10:12.797 "is_configured": true, 00:10:12.797 "data_offset": 0, 00:10:12.797 "data_size": 65536 00:10:12.797 } 00:10:12.797 ] 00:10:12.797 }' 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.797 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.057 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.057 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.057 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:10:13.057 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.057 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.057 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:13.057 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:13.057 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.057 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.057 [2024-11-25 15:37:11.690934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.317 15:37:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.317 "name": "Existed_Raid", 00:10:13.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.317 "strip_size_kb": 0, 00:10:13.317 "state": "configuring", 00:10:13.317 "raid_level": "raid1", 00:10:13.317 "superblock": false, 00:10:13.317 "num_base_bdevs": 3, 00:10:13.317 "num_base_bdevs_discovered": 1, 00:10:13.317 "num_base_bdevs_operational": 3, 00:10:13.317 "base_bdevs_list": [ 00:10:13.317 { 00:10:13.317 "name": null, 00:10:13.317 "uuid": "82822f82-9eb0-4f18-8913-583968efd92c", 00:10:13.317 "is_configured": false, 00:10:13.317 "data_offset": 0, 00:10:13.317 "data_size": 65536 00:10:13.317 }, 00:10:13.317 { 00:10:13.317 "name": null, 00:10:13.317 "uuid": "00c4a1f6-7ea9-4555-b320-d4b8c5b286cb", 00:10:13.317 "is_configured": false, 00:10:13.317 "data_offset": 0, 00:10:13.317 "data_size": 65536 00:10:13.317 }, 00:10:13.317 { 00:10:13.317 "name": "BaseBdev3", 00:10:13.317 "uuid": "c5547057-9658-41e6-bcdc-866e2f1f636b", 00:10:13.317 "is_configured": true, 00:10:13.317 "data_offset": 0, 00:10:13.317 "data_size": 65536 00:10:13.317 } 00:10:13.317 ] 00:10:13.317 }' 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.317 15:37:11 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:13.576 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.576 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:13.576 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.576 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.576 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.834 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:13.834 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:13.834 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.834 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.834 [2024-11-25 15:37:12.265843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.835 "name": "Existed_Raid", 00:10:13.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.835 "strip_size_kb": 0, 00:10:13.835 "state": "configuring", 00:10:13.835 "raid_level": "raid1", 00:10:13.835 "superblock": false, 00:10:13.835 "num_base_bdevs": 3, 00:10:13.835 "num_base_bdevs_discovered": 2, 00:10:13.835 "num_base_bdevs_operational": 3, 00:10:13.835 "base_bdevs_list": [ 00:10:13.835 { 00:10:13.835 "name": null, 00:10:13.835 "uuid": "82822f82-9eb0-4f18-8913-583968efd92c", 00:10:13.835 "is_configured": false, 00:10:13.835 "data_offset": 0, 00:10:13.835 "data_size": 65536 00:10:13.835 }, 00:10:13.835 { 00:10:13.835 "name": "BaseBdev2", 00:10:13.835 "uuid": "00c4a1f6-7ea9-4555-b320-d4b8c5b286cb", 00:10:13.835 "is_configured": true, 00:10:13.835 "data_offset": 0, 00:10:13.835 "data_size": 65536 00:10:13.835 }, 00:10:13.835 { 
00:10:13.835 "name": "BaseBdev3", 00:10:13.835 "uuid": "c5547057-9658-41e6-bcdc-866e2f1f636b", 00:10:13.835 "is_configured": true, 00:10:13.835 "data_offset": 0, 00:10:13.835 "data_size": 65536 00:10:13.835 } 00:10:13.835 ] 00:10:13.835 }' 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.835 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 82822f82-9eb0-4f18-8913-583968efd92c 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.094 15:37:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.094 [2024-11-25 15:37:12.757385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:14.094 [2024-11-25 15:37:12.757498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:14.094 [2024-11-25 15:37:12.757522] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:14.094 [2024-11-25 15:37:12.757804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:14.094 [2024-11-25 15:37:12.758001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:14.094 [2024-11-25 15:37:12.758066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:14.094 [2024-11-25 15:37:12.758337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.094 NewBaseBdev 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.094 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.353 [ 00:10:14.353 { 00:10:14.353 "name": "NewBaseBdev", 00:10:14.353 "aliases": [ 00:10:14.353 "82822f82-9eb0-4f18-8913-583968efd92c" 00:10:14.353 ], 00:10:14.353 "product_name": "Malloc disk", 00:10:14.353 "block_size": 512, 00:10:14.353 "num_blocks": 65536, 00:10:14.353 "uuid": "82822f82-9eb0-4f18-8913-583968efd92c", 00:10:14.353 "assigned_rate_limits": { 00:10:14.353 "rw_ios_per_sec": 0, 00:10:14.353 "rw_mbytes_per_sec": 0, 00:10:14.353 "r_mbytes_per_sec": 0, 00:10:14.353 "w_mbytes_per_sec": 0 00:10:14.353 }, 00:10:14.353 "claimed": true, 00:10:14.353 "claim_type": "exclusive_write", 00:10:14.353 "zoned": false, 00:10:14.353 "supported_io_types": { 00:10:14.353 "read": true, 00:10:14.353 "write": true, 00:10:14.353 "unmap": true, 00:10:14.353 "flush": true, 00:10:14.353 "reset": true, 00:10:14.353 "nvme_admin": false, 00:10:14.353 "nvme_io": false, 00:10:14.353 "nvme_io_md": false, 00:10:14.353 "write_zeroes": true, 00:10:14.353 "zcopy": true, 00:10:14.353 "get_zone_info": false, 00:10:14.353 "zone_management": false, 00:10:14.353 "zone_append": false, 00:10:14.353 "compare": false, 00:10:14.353 "compare_and_write": false, 00:10:14.353 "abort": true, 00:10:14.353 "seek_hole": false, 00:10:14.353 "seek_data": false, 00:10:14.353 "copy": true, 00:10:14.353 "nvme_iov_md": false 00:10:14.353 }, 00:10:14.353 "memory_domains": [ 00:10:14.353 { 00:10:14.353 
"dma_device_id": "system", 00:10:14.353 "dma_device_type": 1 00:10:14.353 }, 00:10:14.353 { 00:10:14.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.353 "dma_device_type": 2 00:10:14.353 } 00:10:14.353 ], 00:10:14.353 "driver_specific": {} 00:10:14.353 } 00:10:14.353 ] 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.353 "name": "Existed_Raid", 00:10:14.353 "uuid": "0f366f31-c832-4fb8-a2fe-a015790f3456", 00:10:14.353 "strip_size_kb": 0, 00:10:14.353 "state": "online", 00:10:14.353 "raid_level": "raid1", 00:10:14.353 "superblock": false, 00:10:14.353 "num_base_bdevs": 3, 00:10:14.353 "num_base_bdevs_discovered": 3, 00:10:14.353 "num_base_bdevs_operational": 3, 00:10:14.353 "base_bdevs_list": [ 00:10:14.353 { 00:10:14.353 "name": "NewBaseBdev", 00:10:14.353 "uuid": "82822f82-9eb0-4f18-8913-583968efd92c", 00:10:14.353 "is_configured": true, 00:10:14.353 "data_offset": 0, 00:10:14.353 "data_size": 65536 00:10:14.353 }, 00:10:14.353 { 00:10:14.353 "name": "BaseBdev2", 00:10:14.353 "uuid": "00c4a1f6-7ea9-4555-b320-d4b8c5b286cb", 00:10:14.353 "is_configured": true, 00:10:14.353 "data_offset": 0, 00:10:14.353 "data_size": 65536 00:10:14.353 }, 00:10:14.353 { 00:10:14.353 "name": "BaseBdev3", 00:10:14.353 "uuid": "c5547057-9658-41e6-bcdc-866e2f1f636b", 00:10:14.353 "is_configured": true, 00:10:14.353 "data_offset": 0, 00:10:14.353 "data_size": 65536 00:10:14.353 } 00:10:14.353 ] 00:10:14.353 }' 00:10:14.353 15:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.354 15:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.613 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:14.613 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:14.613 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:14.613 15:37:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:14.613 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:14.613 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:14.613 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:14.613 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.613 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.613 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:14.613 [2024-11-25 15:37:13.256876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.613 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.613 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.613 "name": "Existed_Raid", 00:10:14.613 "aliases": [ 00:10:14.613 "0f366f31-c832-4fb8-a2fe-a015790f3456" 00:10:14.613 ], 00:10:14.613 "product_name": "Raid Volume", 00:10:14.613 "block_size": 512, 00:10:14.613 "num_blocks": 65536, 00:10:14.613 "uuid": "0f366f31-c832-4fb8-a2fe-a015790f3456", 00:10:14.613 "assigned_rate_limits": { 00:10:14.613 "rw_ios_per_sec": 0, 00:10:14.613 "rw_mbytes_per_sec": 0, 00:10:14.613 "r_mbytes_per_sec": 0, 00:10:14.613 "w_mbytes_per_sec": 0 00:10:14.613 }, 00:10:14.613 "claimed": false, 00:10:14.613 "zoned": false, 00:10:14.613 "supported_io_types": { 00:10:14.613 "read": true, 00:10:14.613 "write": true, 00:10:14.613 "unmap": false, 00:10:14.613 "flush": false, 00:10:14.613 "reset": true, 00:10:14.613 "nvme_admin": false, 00:10:14.613 "nvme_io": false, 00:10:14.613 "nvme_io_md": false, 00:10:14.613 "write_zeroes": true, 00:10:14.613 "zcopy": false, 00:10:14.613 
"get_zone_info": false, 00:10:14.613 "zone_management": false, 00:10:14.613 "zone_append": false, 00:10:14.613 "compare": false, 00:10:14.613 "compare_and_write": false, 00:10:14.613 "abort": false, 00:10:14.613 "seek_hole": false, 00:10:14.613 "seek_data": false, 00:10:14.613 "copy": false, 00:10:14.613 "nvme_iov_md": false 00:10:14.613 }, 00:10:14.613 "memory_domains": [ 00:10:14.613 { 00:10:14.613 "dma_device_id": "system", 00:10:14.613 "dma_device_type": 1 00:10:14.613 }, 00:10:14.613 { 00:10:14.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.613 "dma_device_type": 2 00:10:14.613 }, 00:10:14.613 { 00:10:14.613 "dma_device_id": "system", 00:10:14.613 "dma_device_type": 1 00:10:14.613 }, 00:10:14.613 { 00:10:14.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.613 "dma_device_type": 2 00:10:14.613 }, 00:10:14.613 { 00:10:14.613 "dma_device_id": "system", 00:10:14.613 "dma_device_type": 1 00:10:14.613 }, 00:10:14.613 { 00:10:14.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.613 "dma_device_type": 2 00:10:14.613 } 00:10:14.613 ], 00:10:14.613 "driver_specific": { 00:10:14.613 "raid": { 00:10:14.613 "uuid": "0f366f31-c832-4fb8-a2fe-a015790f3456", 00:10:14.613 "strip_size_kb": 0, 00:10:14.613 "state": "online", 00:10:14.613 "raid_level": "raid1", 00:10:14.613 "superblock": false, 00:10:14.613 "num_base_bdevs": 3, 00:10:14.613 "num_base_bdevs_discovered": 3, 00:10:14.613 "num_base_bdevs_operational": 3, 00:10:14.613 "base_bdevs_list": [ 00:10:14.613 { 00:10:14.613 "name": "NewBaseBdev", 00:10:14.613 "uuid": "82822f82-9eb0-4f18-8913-583968efd92c", 00:10:14.613 "is_configured": true, 00:10:14.613 "data_offset": 0, 00:10:14.613 "data_size": 65536 00:10:14.613 }, 00:10:14.613 { 00:10:14.613 "name": "BaseBdev2", 00:10:14.613 "uuid": "00c4a1f6-7ea9-4555-b320-d4b8c5b286cb", 00:10:14.614 "is_configured": true, 00:10:14.614 "data_offset": 0, 00:10:14.614 "data_size": 65536 00:10:14.614 }, 00:10:14.614 { 00:10:14.614 "name": "BaseBdev3", 00:10:14.614 "uuid": 
"c5547057-9658-41e6-bcdc-866e2f1f636b", 00:10:14.614 "is_configured": true, 00:10:14.614 "data_offset": 0, 00:10:14.614 "data_size": 65536 00:10:14.614 } 00:10:14.614 ] 00:10:14.614 } 00:10:14.614 } 00:10:14.614 }' 00:10:14.614 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:14.873 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:14.873 BaseBdev2 00:10:14.873 BaseBdev3' 00:10:14.873 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.873 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.874 
[2024-11-25 15:37:13.536110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.874 [2024-11-25 15:37:13.536182] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.874 [2024-11-25 15:37:13.536294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.874 [2024-11-25 15:37:13.536595] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.874 [2024-11-25 15:37:13.536648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67168 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67168 ']' 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67168 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.874 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67168 00:10:15.134 killing process with pid 67168 00:10:15.134 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.134 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.134 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67168' 00:10:15.134 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67168 00:10:15.134 [2024-11-25 
15:37:13.580891] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.134 15:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67168 00:10:15.395 [2024-11-25 15:37:13.875193] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:16.423 15:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:16.423 ************************************ 00:10:16.423 END TEST raid_state_function_test 00:10:16.423 ************************************ 00:10:16.423 00:10:16.423 real 0m10.324s 00:10:16.423 user 0m16.481s 00:10:16.423 sys 0m1.753s 00:10:16.423 15:37:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.423 15:37:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.423 15:37:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:16.423 15:37:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:16.423 15:37:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.423 15:37:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:16.423 ************************************ 00:10:16.423 START TEST raid_state_function_test_sb 00:10:16.423 ************************************ 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:16.423 15:37:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:16.423 
15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:16.423 Process raid pid: 67789 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67789 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67789' 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67789 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67789 ']' 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.423 15:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.683 [2024-11-25 15:37:15.111658] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:10:16.683 [2024-11-25 15:37:15.111785] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.683 [2024-11-25 15:37:15.287622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.943 [2024-11-25 15:37:15.395572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.943 [2024-11-25 15:37:15.600470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.943 [2024-11-25 15:37:15.600496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.512 [2024-11-25 15:37:15.940023] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.512 [2024-11-25 15:37:15.940131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.512 [2024-11-25 15:37:15.940165] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.512 [2024-11-25 15:37:15.940189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.512 [2024-11-25 15:37:15.940222] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:17.512 [2024-11-25 15:37:15.940250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.512 "name": "Existed_Raid", 00:10:17.512 "uuid": "31d2e6b0-a58c-431a-80ef-c1b561f3c435", 00:10:17.512 "strip_size_kb": 0, 00:10:17.512 "state": "configuring", 00:10:17.512 "raid_level": "raid1", 00:10:17.512 "superblock": true, 00:10:17.512 "num_base_bdevs": 3, 00:10:17.512 "num_base_bdevs_discovered": 0, 00:10:17.512 "num_base_bdevs_operational": 3, 00:10:17.512 "base_bdevs_list": [ 00:10:17.512 { 00:10:17.512 "name": "BaseBdev1", 00:10:17.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.512 "is_configured": false, 00:10:17.512 "data_offset": 0, 00:10:17.512 "data_size": 0 00:10:17.512 }, 00:10:17.512 { 00:10:17.512 "name": "BaseBdev2", 00:10:17.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.512 "is_configured": false, 00:10:17.512 "data_offset": 0, 00:10:17.512 "data_size": 0 00:10:17.512 }, 00:10:17.512 { 00:10:17.512 "name": "BaseBdev3", 00:10:17.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.512 "is_configured": false, 00:10:17.512 "data_offset": 0, 00:10:17.512 "data_size": 0 00:10:17.512 } 00:10:17.512 ] 00:10:17.512 }' 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.512 15:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.772 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:17.772 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.772 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.772 [2024-11-25 15:37:16.411142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.772 [2024-11-25 15:37:16.411221] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:17.772 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.772 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:17.772 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.772 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.772 [2024-11-25 15:37:16.423119] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.772 [2024-11-25 15:37:16.423199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.772 [2024-11-25 15:37:16.423243] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.772 [2024-11-25 15:37:16.423266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.772 [2024-11-25 15:37:16.423284] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:17.772 [2024-11-25 15:37:16.423305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.772 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.772 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:17.772 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.772 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.032 [2024-11-25 15:37:16.469873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.032 BaseBdev1 
00:10:18.032 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.032 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:18.032 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:18.032 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.032 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:18.032 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.032 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.032 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.033 [ 00:10:18.033 { 00:10:18.033 "name": "BaseBdev1", 00:10:18.033 "aliases": [ 00:10:18.033 "45291336-815f-4ee3-98fe-c1450fb9fafb" 00:10:18.033 ], 00:10:18.033 "product_name": "Malloc disk", 00:10:18.033 "block_size": 512, 00:10:18.033 "num_blocks": 65536, 00:10:18.033 "uuid": "45291336-815f-4ee3-98fe-c1450fb9fafb", 00:10:18.033 "assigned_rate_limits": { 00:10:18.033 
"rw_ios_per_sec": 0, 00:10:18.033 "rw_mbytes_per_sec": 0, 00:10:18.033 "r_mbytes_per_sec": 0, 00:10:18.033 "w_mbytes_per_sec": 0 00:10:18.033 }, 00:10:18.033 "claimed": true, 00:10:18.033 "claim_type": "exclusive_write", 00:10:18.033 "zoned": false, 00:10:18.033 "supported_io_types": { 00:10:18.033 "read": true, 00:10:18.033 "write": true, 00:10:18.033 "unmap": true, 00:10:18.033 "flush": true, 00:10:18.033 "reset": true, 00:10:18.033 "nvme_admin": false, 00:10:18.033 "nvme_io": false, 00:10:18.033 "nvme_io_md": false, 00:10:18.033 "write_zeroes": true, 00:10:18.033 "zcopy": true, 00:10:18.033 "get_zone_info": false, 00:10:18.033 "zone_management": false, 00:10:18.033 "zone_append": false, 00:10:18.033 "compare": false, 00:10:18.033 "compare_and_write": false, 00:10:18.033 "abort": true, 00:10:18.033 "seek_hole": false, 00:10:18.033 "seek_data": false, 00:10:18.033 "copy": true, 00:10:18.033 "nvme_iov_md": false 00:10:18.033 }, 00:10:18.033 "memory_domains": [ 00:10:18.033 { 00:10:18.033 "dma_device_id": "system", 00:10:18.033 "dma_device_type": 1 00:10:18.033 }, 00:10:18.033 { 00:10:18.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.033 "dma_device_type": 2 00:10:18.033 } 00:10:18.033 ], 00:10:18.033 "driver_specific": {} 00:10:18.033 } 00:10:18.033 ] 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.033 "name": "Existed_Raid", 00:10:18.033 "uuid": "4daadb30-8750-4f76-8919-3bc76599bd54", 00:10:18.033 "strip_size_kb": 0, 00:10:18.033 "state": "configuring", 00:10:18.033 "raid_level": "raid1", 00:10:18.033 "superblock": true, 00:10:18.033 "num_base_bdevs": 3, 00:10:18.033 "num_base_bdevs_discovered": 1, 00:10:18.033 "num_base_bdevs_operational": 3, 00:10:18.033 "base_bdevs_list": [ 00:10:18.033 { 00:10:18.033 "name": "BaseBdev1", 00:10:18.033 "uuid": "45291336-815f-4ee3-98fe-c1450fb9fafb", 00:10:18.033 "is_configured": true, 00:10:18.033 "data_offset": 2048, 00:10:18.033 "data_size": 63488 
00:10:18.033 }, 00:10:18.033 { 00:10:18.033 "name": "BaseBdev2", 00:10:18.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.033 "is_configured": false, 00:10:18.033 "data_offset": 0, 00:10:18.033 "data_size": 0 00:10:18.033 }, 00:10:18.033 { 00:10:18.033 "name": "BaseBdev3", 00:10:18.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.033 "is_configured": false, 00:10:18.033 "data_offset": 0, 00:10:18.033 "data_size": 0 00:10:18.033 } 00:10:18.033 ] 00:10:18.033 }' 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.033 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.603 [2024-11-25 15:37:16.981035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:18.603 [2024-11-25 15:37:16.981125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.603 [2024-11-25 15:37:16.993067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.603 [2024-11-25 15:37:16.994891] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.603 [2024-11-25 15:37:16.994979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.603 [2024-11-25 15:37:16.995017] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:18.603 [2024-11-25 15:37:16.995042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.603 15:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:18.603 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.603 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.603 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.603 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.603 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.603 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.603 "name": "Existed_Raid", 00:10:18.603 "uuid": "5be9d44a-4397-44da-89f2-51cb7902f21f", 00:10:18.603 "strip_size_kb": 0, 00:10:18.603 "state": "configuring", 00:10:18.603 "raid_level": "raid1", 00:10:18.603 "superblock": true, 00:10:18.603 "num_base_bdevs": 3, 00:10:18.603 "num_base_bdevs_discovered": 1, 00:10:18.603 "num_base_bdevs_operational": 3, 00:10:18.603 "base_bdevs_list": [ 00:10:18.603 { 00:10:18.603 "name": "BaseBdev1", 00:10:18.603 "uuid": "45291336-815f-4ee3-98fe-c1450fb9fafb", 00:10:18.603 "is_configured": true, 00:10:18.603 "data_offset": 2048, 00:10:18.603 "data_size": 63488 00:10:18.603 }, 00:10:18.604 { 00:10:18.604 "name": "BaseBdev2", 00:10:18.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.604 "is_configured": false, 00:10:18.604 "data_offset": 0, 00:10:18.604 "data_size": 0 00:10:18.604 }, 00:10:18.604 { 00:10:18.604 "name": "BaseBdev3", 00:10:18.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.604 "is_configured": false, 00:10:18.604 "data_offset": 0, 00:10:18.604 "data_size": 0 00:10:18.604 } 00:10:18.604 ] 00:10:18.604 }' 00:10:18.604 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.604 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.864 [2024-11-25 15:37:17.456549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.864 BaseBdev2 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.864 [ 00:10:18.864 { 00:10:18.864 "name": "BaseBdev2", 00:10:18.864 "aliases": [ 00:10:18.864 "e7cd4222-cad0-4405-b149-729a1f21adef" 00:10:18.864 ], 00:10:18.864 "product_name": "Malloc disk", 00:10:18.864 "block_size": 512, 00:10:18.864 "num_blocks": 65536, 00:10:18.864 "uuid": "e7cd4222-cad0-4405-b149-729a1f21adef", 00:10:18.864 "assigned_rate_limits": { 00:10:18.864 "rw_ios_per_sec": 0, 00:10:18.864 "rw_mbytes_per_sec": 0, 00:10:18.864 "r_mbytes_per_sec": 0, 00:10:18.864 "w_mbytes_per_sec": 0 00:10:18.864 }, 00:10:18.864 "claimed": true, 00:10:18.864 "claim_type": "exclusive_write", 00:10:18.864 "zoned": false, 00:10:18.864 "supported_io_types": { 00:10:18.864 "read": true, 00:10:18.864 "write": true, 00:10:18.864 "unmap": true, 00:10:18.864 "flush": true, 00:10:18.864 "reset": true, 00:10:18.864 "nvme_admin": false, 00:10:18.864 "nvme_io": false, 00:10:18.864 "nvme_io_md": false, 00:10:18.864 "write_zeroes": true, 00:10:18.864 "zcopy": true, 00:10:18.864 "get_zone_info": false, 00:10:18.864 "zone_management": false, 00:10:18.864 "zone_append": false, 00:10:18.864 "compare": false, 00:10:18.864 "compare_and_write": false, 00:10:18.864 "abort": true, 00:10:18.864 "seek_hole": false, 00:10:18.864 "seek_data": false, 00:10:18.864 "copy": true, 00:10:18.864 "nvme_iov_md": false 00:10:18.864 }, 00:10:18.864 "memory_domains": [ 00:10:18.864 { 00:10:18.864 "dma_device_id": "system", 00:10:18.864 "dma_device_type": 1 00:10:18.864 }, 00:10:18.864 { 00:10:18.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.864 "dma_device_type": 2 00:10:18.864 } 00:10:18.864 ], 00:10:18.864 "driver_specific": {} 00:10:18.864 } 00:10:18.864 ] 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.864 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.123 
15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.123 "name": "Existed_Raid", 00:10:19.123 "uuid": "5be9d44a-4397-44da-89f2-51cb7902f21f", 00:10:19.123 "strip_size_kb": 0, 00:10:19.123 "state": "configuring", 00:10:19.123 "raid_level": "raid1", 00:10:19.123 "superblock": true, 00:10:19.123 "num_base_bdevs": 3, 00:10:19.123 "num_base_bdevs_discovered": 2, 00:10:19.123 "num_base_bdevs_operational": 3, 00:10:19.123 "base_bdevs_list": [ 00:10:19.123 { 00:10:19.123 "name": "BaseBdev1", 00:10:19.123 "uuid": "45291336-815f-4ee3-98fe-c1450fb9fafb", 00:10:19.123 "is_configured": true, 00:10:19.123 "data_offset": 2048, 00:10:19.123 "data_size": 63488 00:10:19.123 }, 00:10:19.123 { 00:10:19.123 "name": "BaseBdev2", 00:10:19.123 "uuid": "e7cd4222-cad0-4405-b149-729a1f21adef", 00:10:19.123 "is_configured": true, 00:10:19.123 "data_offset": 2048, 00:10:19.123 "data_size": 63488 00:10:19.123 }, 00:10:19.123 { 00:10:19.123 "name": "BaseBdev3", 00:10:19.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.124 "is_configured": false, 00:10:19.124 "data_offset": 0, 00:10:19.124 "data_size": 0 00:10:19.124 } 00:10:19.124 ] 00:10:19.124 }' 00:10:19.124 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.124 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.384 15:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:19.384 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.384 15:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.384 [2024-11-25 15:37:18.000297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.384 [2024-11-25 15:37:18.000617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:19.384 [2024-11-25 15:37:18.000679] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:19.384 [2024-11-25 15:37:18.000986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:19.384 BaseBdev3 00:10:19.384 [2024-11-25 15:37:18.001202] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:19.384 [2024-11-25 15:37:18.001214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:19.384 [2024-11-25 15:37:18.001377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.384 15:37:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.384 [ 00:10:19.384 { 00:10:19.384 "name": "BaseBdev3", 00:10:19.384 "aliases": [ 00:10:19.384 "d40a88b6-173a-4689-99b4-b580f0269ed8" 00:10:19.384 ], 00:10:19.384 "product_name": "Malloc disk", 00:10:19.384 "block_size": 512, 00:10:19.384 "num_blocks": 65536, 00:10:19.384 "uuid": "d40a88b6-173a-4689-99b4-b580f0269ed8", 00:10:19.384 "assigned_rate_limits": { 00:10:19.384 "rw_ios_per_sec": 0, 00:10:19.384 "rw_mbytes_per_sec": 0, 00:10:19.384 "r_mbytes_per_sec": 0, 00:10:19.384 "w_mbytes_per_sec": 0 00:10:19.384 }, 00:10:19.384 "claimed": true, 00:10:19.384 "claim_type": "exclusive_write", 00:10:19.384 "zoned": false, 00:10:19.384 "supported_io_types": { 00:10:19.384 "read": true, 00:10:19.384 "write": true, 00:10:19.384 "unmap": true, 00:10:19.384 "flush": true, 00:10:19.384 "reset": true, 00:10:19.384 "nvme_admin": false, 00:10:19.384 "nvme_io": false, 00:10:19.384 "nvme_io_md": false, 00:10:19.384 "write_zeroes": true, 00:10:19.384 "zcopy": true, 00:10:19.384 "get_zone_info": false, 00:10:19.384 "zone_management": false, 00:10:19.384 "zone_append": false, 00:10:19.384 "compare": false, 00:10:19.384 "compare_and_write": false, 00:10:19.384 "abort": true, 00:10:19.384 "seek_hole": false, 00:10:19.384 "seek_data": false, 00:10:19.384 "copy": true, 00:10:19.384 "nvme_iov_md": false 00:10:19.384 }, 00:10:19.384 "memory_domains": [ 00:10:19.384 { 00:10:19.384 "dma_device_id": "system", 00:10:19.384 "dma_device_type": 1 00:10:19.384 }, 00:10:19.384 { 00:10:19.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.384 "dma_device_type": 2 00:10:19.384 } 00:10:19.384 ], 00:10:19.384 "driver_specific": {} 00:10:19.384 } 00:10:19.384 ] 
00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.384 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.384 
15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.645 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.645 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.645 "name": "Existed_Raid", 00:10:19.645 "uuid": "5be9d44a-4397-44da-89f2-51cb7902f21f", 00:10:19.645 "strip_size_kb": 0, 00:10:19.645 "state": "online", 00:10:19.645 "raid_level": "raid1", 00:10:19.645 "superblock": true, 00:10:19.645 "num_base_bdevs": 3, 00:10:19.645 "num_base_bdevs_discovered": 3, 00:10:19.645 "num_base_bdevs_operational": 3, 00:10:19.645 "base_bdevs_list": [ 00:10:19.645 { 00:10:19.645 "name": "BaseBdev1", 00:10:19.645 "uuid": "45291336-815f-4ee3-98fe-c1450fb9fafb", 00:10:19.645 "is_configured": true, 00:10:19.645 "data_offset": 2048, 00:10:19.645 "data_size": 63488 00:10:19.645 }, 00:10:19.645 { 00:10:19.645 "name": "BaseBdev2", 00:10:19.645 "uuid": "e7cd4222-cad0-4405-b149-729a1f21adef", 00:10:19.645 "is_configured": true, 00:10:19.645 "data_offset": 2048, 00:10:19.645 "data_size": 63488 00:10:19.645 }, 00:10:19.645 { 00:10:19.645 "name": "BaseBdev3", 00:10:19.645 "uuid": "d40a88b6-173a-4689-99b4-b580f0269ed8", 00:10:19.645 "is_configured": true, 00:10:19.645 "data_offset": 2048, 00:10:19.645 "data_size": 63488 00:10:19.645 } 00:10:19.645 ] 00:10:19.645 }' 00:10:19.645 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.645 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.905 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:19.905 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:19.905 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:19.905 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.905 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.905 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.905 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:19.905 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.905 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.905 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.905 [2024-11-25 15:37:18.507774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.905 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.905 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.905 "name": "Existed_Raid", 00:10:19.905 "aliases": [ 00:10:19.905 "5be9d44a-4397-44da-89f2-51cb7902f21f" 00:10:19.905 ], 00:10:19.905 "product_name": "Raid Volume", 00:10:19.905 "block_size": 512, 00:10:19.905 "num_blocks": 63488, 00:10:19.905 "uuid": "5be9d44a-4397-44da-89f2-51cb7902f21f", 00:10:19.905 "assigned_rate_limits": { 00:10:19.905 "rw_ios_per_sec": 0, 00:10:19.905 "rw_mbytes_per_sec": 0, 00:10:19.905 "r_mbytes_per_sec": 0, 00:10:19.905 "w_mbytes_per_sec": 0 00:10:19.905 }, 00:10:19.905 "claimed": false, 00:10:19.905 "zoned": false, 00:10:19.905 "supported_io_types": { 00:10:19.905 "read": true, 00:10:19.905 "write": true, 00:10:19.905 "unmap": false, 00:10:19.905 "flush": false, 00:10:19.905 "reset": true, 00:10:19.905 "nvme_admin": false, 00:10:19.905 "nvme_io": false, 00:10:19.905 "nvme_io_md": false, 00:10:19.905 "write_zeroes": true, 
00:10:19.905 "zcopy": false, 00:10:19.905 "get_zone_info": false, 00:10:19.905 "zone_management": false, 00:10:19.905 "zone_append": false, 00:10:19.905 "compare": false, 00:10:19.905 "compare_and_write": false, 00:10:19.905 "abort": false, 00:10:19.905 "seek_hole": false, 00:10:19.905 "seek_data": false, 00:10:19.905 "copy": false, 00:10:19.905 "nvme_iov_md": false 00:10:19.905 }, 00:10:19.905 "memory_domains": [ 00:10:19.905 { 00:10:19.905 "dma_device_id": "system", 00:10:19.905 "dma_device_type": 1 00:10:19.905 }, 00:10:19.905 { 00:10:19.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.905 "dma_device_type": 2 00:10:19.905 }, 00:10:19.905 { 00:10:19.905 "dma_device_id": "system", 00:10:19.905 "dma_device_type": 1 00:10:19.905 }, 00:10:19.905 { 00:10:19.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.905 "dma_device_type": 2 00:10:19.905 }, 00:10:19.905 { 00:10:19.905 "dma_device_id": "system", 00:10:19.905 "dma_device_type": 1 00:10:19.905 }, 00:10:19.905 { 00:10:19.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.905 "dma_device_type": 2 00:10:19.905 } 00:10:19.905 ], 00:10:19.905 "driver_specific": { 00:10:19.905 "raid": { 00:10:19.905 "uuid": "5be9d44a-4397-44da-89f2-51cb7902f21f", 00:10:19.905 "strip_size_kb": 0, 00:10:19.905 "state": "online", 00:10:19.905 "raid_level": "raid1", 00:10:19.905 "superblock": true, 00:10:19.905 "num_base_bdevs": 3, 00:10:19.905 "num_base_bdevs_discovered": 3, 00:10:19.905 "num_base_bdevs_operational": 3, 00:10:19.905 "base_bdevs_list": [ 00:10:19.905 { 00:10:19.905 "name": "BaseBdev1", 00:10:19.905 "uuid": "45291336-815f-4ee3-98fe-c1450fb9fafb", 00:10:19.905 "is_configured": true, 00:10:19.905 "data_offset": 2048, 00:10:19.905 "data_size": 63488 00:10:19.905 }, 00:10:19.905 { 00:10:19.905 "name": "BaseBdev2", 00:10:19.905 "uuid": "e7cd4222-cad0-4405-b149-729a1f21adef", 00:10:19.905 "is_configured": true, 00:10:19.905 "data_offset": 2048, 00:10:19.905 "data_size": 63488 00:10:19.905 }, 00:10:19.905 { 
00:10:19.905 "name": "BaseBdev3",
00:10:19.905 "uuid": "d40a88b6-173a-4689-99b4-b580f0269ed8",
00:10:19.905 "is_configured": true,
00:10:19.905 "data_offset": 2048,
00:10:19.905 "data_size": 63488
00:10:19.905 }
00:10:19.905 ]
00:10:19.905 }
00:10:19.905 }
00:10:19.905 }'
00:10:19.905 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:10:20.165 BaseBdev2
00:10:20.165 BaseBdev3'
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.165 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.165 [2024-11-25 15:37:18.803027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:20.424 "name": "Existed_Raid",
00:10:20.424 "uuid": "5be9d44a-4397-44da-89f2-51cb7902f21f",
00:10:20.424 "strip_size_kb": 0,
00:10:20.424 "state": "online",
00:10:20.424 "raid_level": "raid1",
00:10:20.424 "superblock": true,
00:10:20.424 "num_base_bdevs": 3,
00:10:20.424 "num_base_bdevs_discovered": 2,
00:10:20.424 "num_base_bdevs_operational": 2,
00:10:20.424 "base_bdevs_list": [
00:10:20.424 {
00:10:20.424 "name": null,
00:10:20.424 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:20.424 "is_configured": false,
00:10:20.424 "data_offset": 0,
00:10:20.424 "data_size": 63488
00:10:20.424 },
00:10:20.424 {
00:10:20.424 "name": "BaseBdev2",
00:10:20.424 "uuid": "e7cd4222-cad0-4405-b149-729a1f21adef",
00:10:20.424 "is_configured": true,
00:10:20.424 "data_offset": 2048,
00:10:20.424 "data_size": 63488
00:10:20.424 },
00:10:20.424 {
00:10:20.424 "name": "BaseBdev3",
00:10:20.424 "uuid": "d40a88b6-173a-4689-99b4-b580f0269ed8",
00:10:20.424 "is_configured": true,
00:10:20.424 "data_offset": 2048,
00:10:20.424 "data_size": 63488
00:10:20.424 }
00:10:20.424 ]
00:10:20.424 }'
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:20.424 15:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.684 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:10:20.684 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:20.684 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:20.684 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.684 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.684 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.684 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.949 [2024-11-25 15:37:19.371395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.949 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.949 [2024-11-25 15:37:19.507765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:20.950 [2024-11-25 15:37:19.507912] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:20.950 [2024-11-25 15:37:19.603420] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:20.950 [2024-11-25 15:37:19.603564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:20.950 [2024-11-25 15:37:19.603614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:10:20.950 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.950 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:20.950 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:20.950 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.950 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:10:20.950 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.950 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.950 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.210 BaseBdev2
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.210 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.210 [
00:10:21.210 {
00:10:21.210 "name": "BaseBdev2",
00:10:21.210 "aliases": [
00:10:21.210 "f7815cce-b777-444f-bbde-42af5f8a6988"
00:10:21.210 ],
00:10:21.210 "product_name": "Malloc disk",
00:10:21.210 "block_size": 512,
00:10:21.210 "num_blocks": 65536,
00:10:21.210 "uuid": "f7815cce-b777-444f-bbde-42af5f8a6988",
00:10:21.210 "assigned_rate_limits": {
00:10:21.210 "rw_ios_per_sec": 0,
00:10:21.210 "rw_mbytes_per_sec": 0,
00:10:21.210 "r_mbytes_per_sec": 0,
00:10:21.210 "w_mbytes_per_sec": 0
00:10:21.210 },
00:10:21.210 "claimed": false,
00:10:21.210 "zoned": false,
00:10:21.210 "supported_io_types": {
00:10:21.210 "read": true,
00:10:21.210 "write": true,
00:10:21.210 "unmap": true,
00:10:21.210 "flush": true,
00:10:21.210 "reset": true,
00:10:21.210 "nvme_admin": false,
00:10:21.210 "nvme_io": false,
00:10:21.210 "nvme_io_md": false,
00:10:21.210 "write_zeroes": true,
00:10:21.210 "zcopy": true,
00:10:21.210 "get_zone_info": false,
00:10:21.210 "zone_management": false,
00:10:21.210 "zone_append": false,
00:10:21.210 "compare": false,
00:10:21.210 "compare_and_write": false,
00:10:21.210 "abort": true,
00:10:21.210 "seek_hole": false,
00:10:21.210 "seek_data": false,
00:10:21.210 "copy": true,
00:10:21.210 "nvme_iov_md": false
00:10:21.210 },
00:10:21.211 "memory_domains": [
00:10:21.211 {
00:10:21.211 "dma_device_id": "system",
00:10:21.211 "dma_device_type": 1
00:10:21.211 },
00:10:21.211 {
00:10:21.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:21.211 "dma_device_type": 2
00:10:21.211 }
00:10:21.211 ],
00:10:21.211 "driver_specific": {}
00:10:21.211 }
00:10:21.211 ]
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.211 BaseBdev3
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.211 [
00:10:21.211 {
00:10:21.211 "name": "BaseBdev3",
00:10:21.211 "aliases": [
00:10:21.211 "17e378e6-e141-4f4f-b7a9-1e6c79570797"
00:10:21.211 ],
00:10:21.211 "product_name": "Malloc disk",
00:10:21.211 "block_size": 512,
00:10:21.211 "num_blocks": 65536,
00:10:21.211 "uuid": "17e378e6-e141-4f4f-b7a9-1e6c79570797",
00:10:21.211 "assigned_rate_limits": {
00:10:21.211 "rw_ios_per_sec": 0,
00:10:21.211 "rw_mbytes_per_sec": 0,
00:10:21.211 "r_mbytes_per_sec": 0,
00:10:21.211 "w_mbytes_per_sec": 0
00:10:21.211 },
00:10:21.211 "claimed": false,
00:10:21.211 "zoned": false,
00:10:21.211 "supported_io_types": {
00:10:21.211 "read": true,
00:10:21.211 "write": true,
00:10:21.211 "unmap": true,
00:10:21.211 "flush": true,
00:10:21.211 "reset": true,
00:10:21.211 "nvme_admin": false,
00:10:21.211 "nvme_io": false,
00:10:21.211 "nvme_io_md": false,
00:10:21.211 "write_zeroes": true,
00:10:21.211 "zcopy": true,
00:10:21.211 "get_zone_info": false,
00:10:21.211 "zone_management": false,
00:10:21.211 "zone_append": false,
00:10:21.211 "compare": false,
00:10:21.211 "compare_and_write": false,
00:10:21.211 "abort": true,
00:10:21.211 "seek_hole": false,
00:10:21.211 "seek_data": false,
00:10:21.211 "copy": true,
00:10:21.211 "nvme_iov_md": false
00:10:21.211 },
00:10:21.211 "memory_domains": [
00:10:21.211 {
00:10:21.211 "dma_device_id": "system",
00:10:21.211 "dma_device_type": 1
00:10:21.211 },
00:10:21.211 {
00:10:21.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:21.211 "dma_device_type": 2
00:10:21.211 }
00:10:21.211 ],
00:10:21.211 "driver_specific": {}
00:10:21.211 }
00:10:21.211 ]
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.211 [2024-11-25 15:37:19.806328] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:21.211 [2024-11-25 15:37:19.806420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:21.211 [2024-11-25 15:37:19.806462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:21.211 [2024-11-25 15:37:19.808231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:21.211 "name": "Existed_Raid",
00:10:21.211 "uuid": "614d412e-111f-4568-ba5e-dbf8acde438e",
00:10:21.211 "strip_size_kb": 0,
00:10:21.211 "state": "configuring",
00:10:21.211 "raid_level": "raid1",
00:10:21.211 "superblock": true,
00:10:21.211 "num_base_bdevs": 3,
00:10:21.211 "num_base_bdevs_discovered": 2,
00:10:21.211 "num_base_bdevs_operational": 3,
00:10:21.211 "base_bdevs_list": [
00:10:21.211 {
00:10:21.211 "name": "BaseBdev1",
00:10:21.211 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:21.211 "is_configured": false,
00:10:21.211 "data_offset": 0,
00:10:21.211 "data_size": 0
00:10:21.211 },
00:10:21.211 {
00:10:21.211 "name": "BaseBdev2",
00:10:21.211 "uuid": "f7815cce-b777-444f-bbde-42af5f8a6988",
00:10:21.211 "is_configured": true,
00:10:21.211 "data_offset": 2048,
00:10:21.211 "data_size": 63488
00:10:21.211 },
00:10:21.211 {
00:10:21.211 "name": "BaseBdev3",
00:10:21.211 "uuid": "17e378e6-e141-4f4f-b7a9-1e6c79570797",
00:10:21.211 "is_configured": true,
00:10:21.211 "data_offset": 2048,
00:10:21.211 "data_size": 63488
00:10:21.211 }
00:10:21.211 ]
00:10:21.211 }'
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:21.211 15:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.781 [2024-11-25 15:37:20.281521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.781 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:21.781 "name": "Existed_Raid",
00:10:21.781 "uuid": "614d412e-111f-4568-ba5e-dbf8acde438e",
00:10:21.782 "strip_size_kb": 0,
00:10:21.782 "state": "configuring",
00:10:21.782 "raid_level": "raid1",
00:10:21.782 "superblock": true,
00:10:21.782 "num_base_bdevs": 3,
00:10:21.782 "num_base_bdevs_discovered": 1,
00:10:21.782 "num_base_bdevs_operational": 3,
00:10:21.782 "base_bdevs_list": [
00:10:21.782 {
00:10:21.782 "name": "BaseBdev1",
00:10:21.782 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:21.782 "is_configured": false,
00:10:21.782 "data_offset": 0,
00:10:21.782 "data_size": 0
00:10:21.782 },
00:10:21.782 {
00:10:21.782 "name": null,
00:10:21.782 "uuid": "f7815cce-b777-444f-bbde-42af5f8a6988",
00:10:21.782 "is_configured": false,
00:10:21.782 "data_offset": 0,
00:10:21.782 "data_size": 63488
00:10:21.782 },
00:10:21.782 {
00:10:21.782 "name": "BaseBdev3",
00:10:21.782 "uuid": "17e378e6-e141-4f4f-b7a9-1e6c79570797",
00:10:21.782 "is_configured": true,
00:10:21.782 "data_offset": 2048,
00:10:21.782 "data_size": 63488
00:10:21.782 }
00:10:21.782 ]
00:10:21.782 }'
00:10:21.782 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:21.782 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:22.049 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:22.049 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:22.049 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:22.049 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:22.049 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:22.317 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:22.317 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:22.318 [2024-11-25 15:37:20.776301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:22.318 BaseBdev1
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:22.318 [
00:10:22.318 {
00:10:22.318 "name": "BaseBdev1",
00:10:22.318 "aliases": [
00:10:22.318 "9a897501-bb2f-4779-93e9-9e1a61d9a613"
00:10:22.318 ],
00:10:22.318 "product_name": "Malloc disk",
00:10:22.318 "block_size": 512,
00:10:22.318 "num_blocks": 65536,
00:10:22.318 "uuid": "9a897501-bb2f-4779-93e9-9e1a61d9a613",
00:10:22.318 "assigned_rate_limits": {
00:10:22.318 "rw_ios_per_sec": 0,
00:10:22.318 "rw_mbytes_per_sec": 0,
00:10:22.318 "r_mbytes_per_sec": 0,
00:10:22.318 "w_mbytes_per_sec": 0
00:10:22.318 },
00:10:22.318 "claimed": true,
00:10:22.318 "claim_type": "exclusive_write",
00:10:22.318 "zoned": false,
00:10:22.318 "supported_io_types": {
00:10:22.318 "read": true,
00:10:22.318 "write": true,
00:10:22.318 "unmap": true,
00:10:22.318 "flush": true,
00:10:22.318 "reset": true,
00:10:22.318 "nvme_admin": false,
00:10:22.318 "nvme_io": false,
00:10:22.318 "nvme_io_md": false,
00:10:22.318 "write_zeroes": true,
00:10:22.318 "zcopy": true,
00:10:22.318 "get_zone_info": false,
00:10:22.318 "zone_management": false,
00:10:22.318 "zone_append": false,
00:10:22.318 "compare": false,
00:10:22.318 "compare_and_write": false,
00:10:22.318 "abort": true,
00:10:22.318 "seek_hole": false,
00:10:22.318 "seek_data": false,
00:10:22.318 "copy": true,
00:10:22.318 "nvme_iov_md": false
00:10:22.318 },
00:10:22.318 "memory_domains": [
00:10:22.318 {
00:10:22.318 "dma_device_id": "system",
00:10:22.318 "dma_device_type": 1
00:10:22.318 },
00:10:22.318 {
00:10:22.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:22.318 "dma_device_type": 2
00:10:22.318 }
00:10:22.318 ],
00:10:22.318 "driver_specific": {}
00:10:22.318 }
00:10:22.318 ]
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:22.318 "name": "Existed_Raid",
00:10:22.318 "uuid": "614d412e-111f-4568-ba5e-dbf8acde438e",
00:10:22.318 "strip_size_kb": 0,
00:10:22.318 "state": "configuring", 00:10:22.318 "raid_level": "raid1", 00:10:22.318 "superblock": true, 00:10:22.318 "num_base_bdevs": 3, 00:10:22.318 "num_base_bdevs_discovered": 2, 00:10:22.318 "num_base_bdevs_operational": 3, 00:10:22.318 "base_bdevs_list": [ 00:10:22.318 { 00:10:22.318 "name": "BaseBdev1", 00:10:22.318 "uuid": "9a897501-bb2f-4779-93e9-9e1a61d9a613", 00:10:22.318 "is_configured": true, 00:10:22.318 "data_offset": 2048, 00:10:22.318 "data_size": 63488 00:10:22.318 }, 00:10:22.318 { 00:10:22.318 "name": null, 00:10:22.318 "uuid": "f7815cce-b777-444f-bbde-42af5f8a6988", 00:10:22.318 "is_configured": false, 00:10:22.318 "data_offset": 0, 00:10:22.318 "data_size": 63488 00:10:22.318 }, 00:10:22.318 { 00:10:22.318 "name": "BaseBdev3", 00:10:22.318 "uuid": "17e378e6-e141-4f4f-b7a9-1e6c79570797", 00:10:22.318 "is_configured": true, 00:10:22.318 "data_offset": 2048, 00:10:22.318 "data_size": 63488 00:10:22.318 } 00:10:22.318 ] 00:10:22.318 }' 00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.318 15:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.888 [2024-11-25 15:37:21.303417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.888 15:37:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.888 "name": "Existed_Raid", 00:10:22.888 "uuid": "614d412e-111f-4568-ba5e-dbf8acde438e", 00:10:22.888 "strip_size_kb": 0, 00:10:22.888 "state": "configuring", 00:10:22.888 "raid_level": "raid1", 00:10:22.888 "superblock": true, 00:10:22.888 "num_base_bdevs": 3, 00:10:22.888 "num_base_bdevs_discovered": 1, 00:10:22.888 "num_base_bdevs_operational": 3, 00:10:22.888 "base_bdevs_list": [ 00:10:22.888 { 00:10:22.888 "name": "BaseBdev1", 00:10:22.888 "uuid": "9a897501-bb2f-4779-93e9-9e1a61d9a613", 00:10:22.888 "is_configured": true, 00:10:22.888 "data_offset": 2048, 00:10:22.888 "data_size": 63488 00:10:22.888 }, 00:10:22.888 { 00:10:22.888 "name": null, 00:10:22.888 "uuid": "f7815cce-b777-444f-bbde-42af5f8a6988", 00:10:22.888 "is_configured": false, 00:10:22.888 "data_offset": 0, 00:10:22.888 "data_size": 63488 00:10:22.888 }, 00:10:22.888 { 00:10:22.888 "name": null, 00:10:22.888 "uuid": "17e378e6-e141-4f4f-b7a9-1e6c79570797", 00:10:22.888 "is_configured": false, 00:10:22.888 "data_offset": 0, 00:10:22.888 "data_size": 63488 00:10:22.888 } 00:10:22.888 ] 00:10:22.888 }' 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.888 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.148 15:37:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.148 [2024-11-25 15:37:21.778701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.148 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.408 15:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.408 "name": "Existed_Raid", 00:10:23.408 "uuid": "614d412e-111f-4568-ba5e-dbf8acde438e", 00:10:23.408 "strip_size_kb": 0, 00:10:23.408 "state": "configuring", 00:10:23.408 "raid_level": "raid1", 00:10:23.408 "superblock": true, 00:10:23.408 "num_base_bdevs": 3, 00:10:23.408 "num_base_bdevs_discovered": 2, 00:10:23.408 "num_base_bdevs_operational": 3, 00:10:23.408 "base_bdevs_list": [ 00:10:23.408 { 00:10:23.408 "name": "BaseBdev1", 00:10:23.408 "uuid": "9a897501-bb2f-4779-93e9-9e1a61d9a613", 00:10:23.408 "is_configured": true, 00:10:23.408 "data_offset": 2048, 00:10:23.408 "data_size": 63488 00:10:23.408 }, 00:10:23.408 { 00:10:23.408 "name": null, 00:10:23.408 "uuid": "f7815cce-b777-444f-bbde-42af5f8a6988", 00:10:23.408 "is_configured": false, 00:10:23.408 "data_offset": 0, 00:10:23.408 "data_size": 63488 00:10:23.408 }, 00:10:23.408 { 00:10:23.408 "name": "BaseBdev3", 00:10:23.408 "uuid": "17e378e6-e141-4f4f-b7a9-1e6c79570797", 00:10:23.408 "is_configured": true, 00:10:23.408 "data_offset": 2048, 00:10:23.408 "data_size": 63488 00:10:23.408 } 00:10:23.408 ] 00:10:23.408 }' 00:10:23.408 15:37:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.408 15:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.668 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.668 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.668 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.668 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:23.668 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.668 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:23.668 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:23.668 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.668 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.668 [2024-11-25 15:37:22.297757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.928 "name": "Existed_Raid", 00:10:23.928 "uuid": "614d412e-111f-4568-ba5e-dbf8acde438e", 00:10:23.928 "strip_size_kb": 0, 00:10:23.928 "state": "configuring", 00:10:23.928 "raid_level": "raid1", 00:10:23.928 "superblock": true, 00:10:23.928 "num_base_bdevs": 3, 00:10:23.928 "num_base_bdevs_discovered": 1, 00:10:23.928 "num_base_bdevs_operational": 3, 00:10:23.928 "base_bdevs_list": [ 00:10:23.928 { 00:10:23.928 "name": null, 00:10:23.928 "uuid": "9a897501-bb2f-4779-93e9-9e1a61d9a613", 00:10:23.928 "is_configured": false, 00:10:23.928 "data_offset": 0, 00:10:23.928 "data_size": 63488 00:10:23.928 }, 00:10:23.928 { 00:10:23.928 "name": null, 00:10:23.928 "uuid": 
"f7815cce-b777-444f-bbde-42af5f8a6988", 00:10:23.928 "is_configured": false, 00:10:23.928 "data_offset": 0, 00:10:23.928 "data_size": 63488 00:10:23.928 }, 00:10:23.928 { 00:10:23.928 "name": "BaseBdev3", 00:10:23.928 "uuid": "17e378e6-e141-4f4f-b7a9-1e6c79570797", 00:10:23.928 "is_configured": true, 00:10:23.928 "data_offset": 2048, 00:10:23.928 "data_size": 63488 00:10:23.928 } 00:10:23.928 ] 00:10:23.928 }' 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.928 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.188 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.188 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:24.188 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.188 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.188 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.448 [2024-11-25 15:37:22.890283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.448 "name": "Existed_Raid", 00:10:24.448 "uuid": "614d412e-111f-4568-ba5e-dbf8acde438e", 00:10:24.448 "strip_size_kb": 0, 00:10:24.448 "state": "configuring", 00:10:24.448 
"raid_level": "raid1", 00:10:24.448 "superblock": true, 00:10:24.448 "num_base_bdevs": 3, 00:10:24.448 "num_base_bdevs_discovered": 2, 00:10:24.448 "num_base_bdevs_operational": 3, 00:10:24.448 "base_bdevs_list": [ 00:10:24.448 { 00:10:24.448 "name": null, 00:10:24.448 "uuid": "9a897501-bb2f-4779-93e9-9e1a61d9a613", 00:10:24.448 "is_configured": false, 00:10:24.448 "data_offset": 0, 00:10:24.448 "data_size": 63488 00:10:24.448 }, 00:10:24.448 { 00:10:24.448 "name": "BaseBdev2", 00:10:24.448 "uuid": "f7815cce-b777-444f-bbde-42af5f8a6988", 00:10:24.448 "is_configured": true, 00:10:24.448 "data_offset": 2048, 00:10:24.448 "data_size": 63488 00:10:24.448 }, 00:10:24.448 { 00:10:24.448 "name": "BaseBdev3", 00:10:24.448 "uuid": "17e378e6-e141-4f4f-b7a9-1e6c79570797", 00:10:24.448 "is_configured": true, 00:10:24.448 "data_offset": 2048, 00:10:24.448 "data_size": 63488 00:10:24.448 } 00:10:24.448 ] 00:10:24.448 }' 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.448 15:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.708 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.708 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.708 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.708 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:24.708 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.708 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:24.708 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.708 15:37:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:24.708 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.708 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9a897501-bb2f-4779-93e9-9e1a61d9a613 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.968 [2024-11-25 15:37:23.452083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:24.968 [2024-11-25 15:37:23.452386] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:24.968 [2024-11-25 15:37:23.452423] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:24.968 [2024-11-25 15:37:23.452699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:24.968 [2024-11-25 15:37:23.452903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:24.968 NewBaseBdev 00:10:24.968 [2024-11-25 15:37:23.452948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:24.968 [2024-11-25 15:37:23.453110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:24.968 
15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.968 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.968 [ 00:10:24.968 { 00:10:24.968 "name": "NewBaseBdev", 00:10:24.968 "aliases": [ 00:10:24.968 "9a897501-bb2f-4779-93e9-9e1a61d9a613" 00:10:24.968 ], 00:10:24.968 "product_name": "Malloc disk", 00:10:24.968 "block_size": 512, 00:10:24.968 "num_blocks": 65536, 00:10:24.968 "uuid": "9a897501-bb2f-4779-93e9-9e1a61d9a613", 00:10:24.968 "assigned_rate_limits": { 00:10:24.968 "rw_ios_per_sec": 0, 00:10:24.968 "rw_mbytes_per_sec": 0, 00:10:24.968 "r_mbytes_per_sec": 0, 00:10:24.968 "w_mbytes_per_sec": 0 00:10:24.968 }, 00:10:24.968 "claimed": true, 00:10:24.968 "claim_type": "exclusive_write", 00:10:24.968 
"zoned": false, 00:10:24.968 "supported_io_types": { 00:10:24.968 "read": true, 00:10:24.968 "write": true, 00:10:24.968 "unmap": true, 00:10:24.968 "flush": true, 00:10:24.968 "reset": true, 00:10:24.968 "nvme_admin": false, 00:10:24.968 "nvme_io": false, 00:10:24.968 "nvme_io_md": false, 00:10:24.968 "write_zeroes": true, 00:10:24.968 "zcopy": true, 00:10:24.968 "get_zone_info": false, 00:10:24.968 "zone_management": false, 00:10:24.968 "zone_append": false, 00:10:24.968 "compare": false, 00:10:24.968 "compare_and_write": false, 00:10:24.968 "abort": true, 00:10:24.968 "seek_hole": false, 00:10:24.968 "seek_data": false, 00:10:24.968 "copy": true, 00:10:24.968 "nvme_iov_md": false 00:10:24.968 }, 00:10:24.969 "memory_domains": [ 00:10:24.969 { 00:10:24.969 "dma_device_id": "system", 00:10:24.969 "dma_device_type": 1 00:10:24.969 }, 00:10:24.969 { 00:10:24.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.969 "dma_device_type": 2 00:10:24.969 } 00:10:24.969 ], 00:10:24.969 "driver_specific": {} 00:10:24.969 } 00:10:24.969 ] 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.969 "name": "Existed_Raid", 00:10:24.969 "uuid": "614d412e-111f-4568-ba5e-dbf8acde438e", 00:10:24.969 "strip_size_kb": 0, 00:10:24.969 "state": "online", 00:10:24.969 "raid_level": "raid1", 00:10:24.969 "superblock": true, 00:10:24.969 "num_base_bdevs": 3, 00:10:24.969 "num_base_bdevs_discovered": 3, 00:10:24.969 "num_base_bdevs_operational": 3, 00:10:24.969 "base_bdevs_list": [ 00:10:24.969 { 00:10:24.969 "name": "NewBaseBdev", 00:10:24.969 "uuid": "9a897501-bb2f-4779-93e9-9e1a61d9a613", 00:10:24.969 "is_configured": true, 00:10:24.969 "data_offset": 2048, 00:10:24.969 "data_size": 63488 00:10:24.969 }, 00:10:24.969 { 00:10:24.969 "name": "BaseBdev2", 00:10:24.969 "uuid": "f7815cce-b777-444f-bbde-42af5f8a6988", 00:10:24.969 "is_configured": true, 00:10:24.969 "data_offset": 2048, 00:10:24.969 "data_size": 63488 00:10:24.969 }, 00:10:24.969 
{ 00:10:24.969 "name": "BaseBdev3", 00:10:24.969 "uuid": "17e378e6-e141-4f4f-b7a9-1e6c79570797", 00:10:24.969 "is_configured": true, 00:10:24.969 "data_offset": 2048, 00:10:24.969 "data_size": 63488 00:10:24.969 } 00:10:24.969 ] 00:10:24.969 }' 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.969 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.538 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:25.538 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:25.538 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:25.538 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:25.538 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:25.538 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:25.538 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:25.538 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.538 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.538 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:25.538 [2024-11-25 15:37:23.915590] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.538 15:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.538 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.538 "name": "Existed_Raid", 00:10:25.538 
"aliases": [ 00:10:25.538 "614d412e-111f-4568-ba5e-dbf8acde438e" 00:10:25.538 ], 00:10:25.538 "product_name": "Raid Volume", 00:10:25.538 "block_size": 512, 00:10:25.538 "num_blocks": 63488, 00:10:25.538 "uuid": "614d412e-111f-4568-ba5e-dbf8acde438e", 00:10:25.538 "assigned_rate_limits": { 00:10:25.538 "rw_ios_per_sec": 0, 00:10:25.538 "rw_mbytes_per_sec": 0, 00:10:25.538 "r_mbytes_per_sec": 0, 00:10:25.538 "w_mbytes_per_sec": 0 00:10:25.538 }, 00:10:25.538 "claimed": false, 00:10:25.538 "zoned": false, 00:10:25.538 "supported_io_types": { 00:10:25.538 "read": true, 00:10:25.538 "write": true, 00:10:25.538 "unmap": false, 00:10:25.538 "flush": false, 00:10:25.538 "reset": true, 00:10:25.538 "nvme_admin": false, 00:10:25.538 "nvme_io": false, 00:10:25.538 "nvme_io_md": false, 00:10:25.538 "write_zeroes": true, 00:10:25.538 "zcopy": false, 00:10:25.538 "get_zone_info": false, 00:10:25.538 "zone_management": false, 00:10:25.538 "zone_append": false, 00:10:25.538 "compare": false, 00:10:25.538 "compare_and_write": false, 00:10:25.538 "abort": false, 00:10:25.538 "seek_hole": false, 00:10:25.538 "seek_data": false, 00:10:25.538 "copy": false, 00:10:25.538 "nvme_iov_md": false 00:10:25.538 }, 00:10:25.538 "memory_domains": [ 00:10:25.538 { 00:10:25.538 "dma_device_id": "system", 00:10:25.538 "dma_device_type": 1 00:10:25.538 }, 00:10:25.538 { 00:10:25.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.538 "dma_device_type": 2 00:10:25.538 }, 00:10:25.538 { 00:10:25.538 "dma_device_id": "system", 00:10:25.538 "dma_device_type": 1 00:10:25.538 }, 00:10:25.538 { 00:10:25.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.538 "dma_device_type": 2 00:10:25.538 }, 00:10:25.538 { 00:10:25.538 "dma_device_id": "system", 00:10:25.538 "dma_device_type": 1 00:10:25.538 }, 00:10:25.538 { 00:10:25.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.538 "dma_device_type": 2 00:10:25.538 } 00:10:25.538 ], 00:10:25.538 "driver_specific": { 00:10:25.539 "raid": { 00:10:25.539 
"uuid": "614d412e-111f-4568-ba5e-dbf8acde438e", 00:10:25.539 "strip_size_kb": 0, 00:10:25.539 "state": "online", 00:10:25.539 "raid_level": "raid1", 00:10:25.539 "superblock": true, 00:10:25.539 "num_base_bdevs": 3, 00:10:25.539 "num_base_bdevs_discovered": 3, 00:10:25.539 "num_base_bdevs_operational": 3, 00:10:25.539 "base_bdevs_list": [ 00:10:25.539 { 00:10:25.539 "name": "NewBaseBdev", 00:10:25.539 "uuid": "9a897501-bb2f-4779-93e9-9e1a61d9a613", 00:10:25.539 "is_configured": true, 00:10:25.539 "data_offset": 2048, 00:10:25.539 "data_size": 63488 00:10:25.539 }, 00:10:25.539 { 00:10:25.539 "name": "BaseBdev2", 00:10:25.539 "uuid": "f7815cce-b777-444f-bbde-42af5f8a6988", 00:10:25.539 "is_configured": true, 00:10:25.539 "data_offset": 2048, 00:10:25.539 "data_size": 63488 00:10:25.539 }, 00:10:25.539 { 00:10:25.539 "name": "BaseBdev3", 00:10:25.539 "uuid": "17e378e6-e141-4f4f-b7a9-1e6c79570797", 00:10:25.539 "is_configured": true, 00:10:25.539 "data_offset": 2048, 00:10:25.539 "data_size": 63488 00:10:25.539 } 00:10:25.539 ] 00:10:25.539 } 00:10:25.539 } 00:10:25.539 }' 00:10:25.539 15:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:25.539 BaseBdev2 00:10:25.539 BaseBdev3' 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:25.539 15:37:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.539 [2024-11-25 15:37:24.202858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.539 [2024-11-25 15:37:24.202929] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.539 [2024-11-25 15:37:24.203023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.539 [2024-11-25 15:37:24.203337] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.539 [2024-11-25 15:37:24.203391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67789 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67789 ']' 
00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67789 00:10:25.539 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:25.799 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.799 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67789 00:10:25.799 killing process with pid 67789 00:10:25.799 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.799 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.799 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67789' 00:10:25.799 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67789 00:10:25.799 [2024-11-25 15:37:24.254218] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.799 15:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67789 00:10:26.058 [2024-11-25 15:37:24.542475] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:26.997 15:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:26.997 00:10:26.997 real 0m10.597s 00:10:26.997 user 0m16.993s 00:10:26.997 sys 0m1.769s 00:10:26.997 ************************************ 00:10:26.997 END TEST raid_state_function_test_sb 00:10:26.997 ************************************ 00:10:26.997 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.997 15:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.997 15:37:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:10:26.997 15:37:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:26.997 15:37:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.997 15:37:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.265 ************************************ 00:10:27.265 START TEST raid_superblock_test 00:10:27.265 ************************************ 00:10:27.265 15:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:27.265 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:27.265 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68409 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68409 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68409 ']' 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.266 15:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.266 [2024-11-25 15:37:25.771105] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:10:27.266 [2024-11-25 15:37:25.771298] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68409 ] 00:10:27.266 [2024-11-25 15:37:25.942401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.525 [2024-11-25 15:37:26.049177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.783 [2024-11-25 15:37:26.240263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.783 [2024-11-25 15:37:26.240303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:28.042 
15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.042 malloc1 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.042 [2024-11-25 15:37:26.641173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:28.042 [2024-11-25 15:37:26.641294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.042 [2024-11-25 15:37:26.641338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:28.042 [2024-11-25 15:37:26.641369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.042 [2024-11-25 15:37:26.643684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.042 [2024-11-25 15:37:26.643786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:28.042 pt1 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.042 malloc2 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.042 [2024-11-25 15:37:26.699720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:28.042 [2024-11-25 15:37:26.699831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.042 [2024-11-25 15:37:26.699872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:28.042 [2024-11-25 15:37:26.699902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.042 [2024-11-25 15:37:26.701894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.042 [2024-11-25 15:37:26.701962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:28.042 
pt2 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.042 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.301 malloc3 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.301 [2024-11-25 15:37:26.770068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:28.301 [2024-11-25 15:37:26.770123] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.301 [2024-11-25 15:37:26.770159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:28.301 [2024-11-25 15:37:26.770167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.301 [2024-11-25 15:37:26.772215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.301 [2024-11-25 15:37:26.772249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:28.301 pt3 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.301 [2024-11-25 15:37:26.782103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:28.301 [2024-11-25 15:37:26.783885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:28.301 [2024-11-25 15:37:26.783954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:28.301 [2024-11-25 15:37:26.784127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:28.301 [2024-11-25 15:37:26.784157] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:28.301 [2024-11-25 15:37:26.784409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:28.301 
[2024-11-25 15:37:26.784597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:28.301 [2024-11-25 15:37:26.784617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:28.301 [2024-11-25 15:37:26.784780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.301 "name": "raid_bdev1", 00:10:28.301 "uuid": "f68bf1b1-efb1-4988-8113-0e2002345062", 00:10:28.301 "strip_size_kb": 0, 00:10:28.301 "state": "online", 00:10:28.301 "raid_level": "raid1", 00:10:28.301 "superblock": true, 00:10:28.301 "num_base_bdevs": 3, 00:10:28.301 "num_base_bdevs_discovered": 3, 00:10:28.301 "num_base_bdevs_operational": 3, 00:10:28.301 "base_bdevs_list": [ 00:10:28.301 { 00:10:28.301 "name": "pt1", 00:10:28.301 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:28.301 "is_configured": true, 00:10:28.301 "data_offset": 2048, 00:10:28.301 "data_size": 63488 00:10:28.301 }, 00:10:28.301 { 00:10:28.301 "name": "pt2", 00:10:28.301 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.301 "is_configured": true, 00:10:28.301 "data_offset": 2048, 00:10:28.301 "data_size": 63488 00:10:28.301 }, 00:10:28.301 { 00:10:28.301 "name": "pt3", 00:10:28.301 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.301 "is_configured": true, 00:10:28.301 "data_offset": 2048, 00:10:28.301 "data_size": 63488 00:10:28.301 } 00:10:28.301 ] 00:10:28.301 }' 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.301 15:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.560 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:28.560 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:28.560 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:28.560 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:28.560 15:37:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:28.560 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.560 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.560 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:28.560 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.560 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.560 [2024-11-25 15:37:27.201650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.560 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.560 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:28.560 "name": "raid_bdev1", 00:10:28.560 "aliases": [ 00:10:28.560 "f68bf1b1-efb1-4988-8113-0e2002345062" 00:10:28.560 ], 00:10:28.560 "product_name": "Raid Volume", 00:10:28.560 "block_size": 512, 00:10:28.560 "num_blocks": 63488, 00:10:28.560 "uuid": "f68bf1b1-efb1-4988-8113-0e2002345062", 00:10:28.560 "assigned_rate_limits": { 00:10:28.560 "rw_ios_per_sec": 0, 00:10:28.560 "rw_mbytes_per_sec": 0, 00:10:28.560 "r_mbytes_per_sec": 0, 00:10:28.560 "w_mbytes_per_sec": 0 00:10:28.560 }, 00:10:28.560 "claimed": false, 00:10:28.560 "zoned": false, 00:10:28.560 "supported_io_types": { 00:10:28.560 "read": true, 00:10:28.560 "write": true, 00:10:28.560 "unmap": false, 00:10:28.560 "flush": false, 00:10:28.560 "reset": true, 00:10:28.560 "nvme_admin": false, 00:10:28.560 "nvme_io": false, 00:10:28.560 "nvme_io_md": false, 00:10:28.560 "write_zeroes": true, 00:10:28.560 "zcopy": false, 00:10:28.560 "get_zone_info": false, 00:10:28.560 "zone_management": false, 00:10:28.560 "zone_append": false, 00:10:28.560 "compare": false, 00:10:28.560 
"compare_and_write": false, 00:10:28.560 "abort": false, 00:10:28.560 "seek_hole": false, 00:10:28.560 "seek_data": false, 00:10:28.560 "copy": false, 00:10:28.560 "nvme_iov_md": false 00:10:28.560 }, 00:10:28.560 "memory_domains": [ 00:10:28.560 { 00:10:28.560 "dma_device_id": "system", 00:10:28.560 "dma_device_type": 1 00:10:28.560 }, 00:10:28.560 { 00:10:28.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.560 "dma_device_type": 2 00:10:28.560 }, 00:10:28.560 { 00:10:28.560 "dma_device_id": "system", 00:10:28.560 "dma_device_type": 1 00:10:28.560 }, 00:10:28.560 { 00:10:28.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.560 "dma_device_type": 2 00:10:28.560 }, 00:10:28.560 { 00:10:28.560 "dma_device_id": "system", 00:10:28.560 "dma_device_type": 1 00:10:28.560 }, 00:10:28.560 { 00:10:28.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.560 "dma_device_type": 2 00:10:28.560 } 00:10:28.560 ], 00:10:28.560 "driver_specific": { 00:10:28.560 "raid": { 00:10:28.560 "uuid": "f68bf1b1-efb1-4988-8113-0e2002345062", 00:10:28.560 "strip_size_kb": 0, 00:10:28.560 "state": "online", 00:10:28.560 "raid_level": "raid1", 00:10:28.560 "superblock": true, 00:10:28.560 "num_base_bdevs": 3, 00:10:28.560 "num_base_bdevs_discovered": 3, 00:10:28.560 "num_base_bdevs_operational": 3, 00:10:28.560 "base_bdevs_list": [ 00:10:28.560 { 00:10:28.560 "name": "pt1", 00:10:28.560 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:28.560 "is_configured": true, 00:10:28.560 "data_offset": 2048, 00:10:28.560 "data_size": 63488 00:10:28.560 }, 00:10:28.560 { 00:10:28.560 "name": "pt2", 00:10:28.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.560 "is_configured": true, 00:10:28.560 "data_offset": 2048, 00:10:28.560 "data_size": 63488 00:10:28.560 }, 00:10:28.560 { 00:10:28.560 "name": "pt3", 00:10:28.560 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.560 "is_configured": true, 00:10:28.560 "data_offset": 2048, 00:10:28.560 "data_size": 63488 00:10:28.560 } 
00:10:28.560 ] 00:10:28.561 } 00:10:28.561 } 00:10:28.561 }' 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:28.820 pt2 00:10:28.820 pt3' 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.820 15:37:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.820 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.820 [2024-11-25 15:37:27.485111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f68bf1b1-efb1-4988-8113-0e2002345062 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f68bf1b1-efb1-4988-8113-0e2002345062 ']' 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.079 [2024-11-25 15:37:27.532735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:29.079 [2024-11-25 15:37:27.532765] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.079 [2024-11-25 15:37:27.532837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.079 [2024-11-25 15:37:27.532909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.079 [2024-11-25 15:37:27.532919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:29.079 
15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.079 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.079 [2024-11-25 15:37:27.684533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:29.079 [2024-11-25 15:37:27.686376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:29.079 [2024-11-25 15:37:27.686430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:10:29.079 [2024-11-25 15:37:27.686476] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:29.079 [2024-11-25 15:37:27.686545] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:29.079 [2024-11-25 15:37:27.686565] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:29.079 [2024-11-25 15:37:27.686581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:29.079 [2024-11-25 15:37:27.686590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:29.079 request: 00:10:29.079 { 00:10:29.079 "name": "raid_bdev1", 00:10:29.079 "raid_level": "raid1", 00:10:29.079 "base_bdevs": [ 00:10:29.079 "malloc1", 00:10:29.079 "malloc2", 00:10:29.079 "malloc3" 00:10:29.079 ], 00:10:29.080 "superblock": false, 00:10:29.080 "method": "bdev_raid_create", 00:10:29.080 "req_id": 1 00:10:29.080 } 00:10:29.080 Got JSON-RPC error response 00:10:29.080 response: 00:10:29.080 { 00:10:29.080 "code": -17, 00:10:29.080 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:29.080 } 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:29.080 15:37:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.080 [2024-11-25 15:37:27.732395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:29.080 [2024-11-25 15:37:27.732450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.080 [2024-11-25 15:37:27.732474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:29.080 [2024-11-25 15:37:27.732483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.080 [2024-11-25 15:37:27.734613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.080 [2024-11-25 15:37:27.734649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:29.080 [2024-11-25 15:37:27.734722] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:29.080 [2024-11-25 15:37:27.734775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:29.080 pt1 00:10:29.080 15:37:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.080 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.338 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.338 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.338 "name": "raid_bdev1", 00:10:29.338 "uuid": "f68bf1b1-efb1-4988-8113-0e2002345062", 00:10:29.338 "strip_size_kb": 0, 00:10:29.338 "state": 
"configuring", 00:10:29.338 "raid_level": "raid1", 00:10:29.338 "superblock": true, 00:10:29.338 "num_base_bdevs": 3, 00:10:29.338 "num_base_bdevs_discovered": 1, 00:10:29.338 "num_base_bdevs_operational": 3, 00:10:29.338 "base_bdevs_list": [ 00:10:29.338 { 00:10:29.338 "name": "pt1", 00:10:29.338 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:29.338 "is_configured": true, 00:10:29.338 "data_offset": 2048, 00:10:29.338 "data_size": 63488 00:10:29.338 }, 00:10:29.338 { 00:10:29.338 "name": null, 00:10:29.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.338 "is_configured": false, 00:10:29.338 "data_offset": 2048, 00:10:29.338 "data_size": 63488 00:10:29.338 }, 00:10:29.338 { 00:10:29.338 "name": null, 00:10:29.338 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:29.338 "is_configured": false, 00:10:29.338 "data_offset": 2048, 00:10:29.338 "data_size": 63488 00:10:29.338 } 00:10:29.338 ] 00:10:29.338 }' 00:10:29.338 15:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.338 15:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.596 [2024-11-25 15:37:28.147736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:29.596 [2024-11-25 15:37:28.147792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.596 [2024-11-25 15:37:28.147814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:29.596 
[2024-11-25 15:37:28.147823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.596 [2024-11-25 15:37:28.148296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.596 [2024-11-25 15:37:28.148323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:29.596 [2024-11-25 15:37:28.148404] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:29.596 [2024-11-25 15:37:28.148432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:29.596 pt2 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.596 [2024-11-25 15:37:28.159736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.596 "name": "raid_bdev1", 00:10:29.596 "uuid": "f68bf1b1-efb1-4988-8113-0e2002345062", 00:10:29.596 "strip_size_kb": 0, 00:10:29.596 "state": "configuring", 00:10:29.596 "raid_level": "raid1", 00:10:29.596 "superblock": true, 00:10:29.596 "num_base_bdevs": 3, 00:10:29.596 "num_base_bdevs_discovered": 1, 00:10:29.596 "num_base_bdevs_operational": 3, 00:10:29.596 "base_bdevs_list": [ 00:10:29.596 { 00:10:29.596 "name": "pt1", 00:10:29.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:29.596 "is_configured": true, 00:10:29.596 "data_offset": 2048, 00:10:29.596 "data_size": 63488 00:10:29.596 }, 00:10:29.596 { 00:10:29.596 "name": null, 00:10:29.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.596 "is_configured": false, 00:10:29.596 "data_offset": 0, 00:10:29.596 "data_size": 63488 00:10:29.596 }, 00:10:29.596 { 00:10:29.596 "name": null, 00:10:29.596 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:29.596 "is_configured": false, 00:10:29.596 
"data_offset": 2048, 00:10:29.596 "data_size": 63488 00:10:29.596 } 00:10:29.596 ] 00:10:29.596 }' 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.596 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 [2024-11-25 15:37:28.599061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:30.163 [2024-11-25 15:37:28.599131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.163 [2024-11-25 15:37:28.599151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:30.163 [2024-11-25 15:37:28.599161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.163 [2024-11-25 15:37:28.599614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.163 [2024-11-25 15:37:28.599644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:30.163 [2024-11-25 15:37:28.599725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:30.163 [2024-11-25 15:37:28.599771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:30.163 pt2 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 15:37:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 [2024-11-25 15:37:28.611015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:30.163 [2024-11-25 15:37:28.611083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.163 [2024-11-25 15:37:28.611104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:30.163 [2024-11-25 15:37:28.611117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.163 [2024-11-25 15:37:28.611527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.163 [2024-11-25 15:37:28.611557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:30.163 [2024-11-25 15:37:28.611626] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:30.163 [2024-11-25 15:37:28.611653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:30.163 [2024-11-25 15:37:28.611793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:30.163 [2024-11-25 15:37:28.611813] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:30.163 [2024-11-25 15:37:28.612043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:30.163 [2024-11-25 15:37:28.612198] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:30.163 [2024-11-25 15:37:28.612212] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:30.163 [2024-11-25 15:37:28.612347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.163 pt3 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.163 "name": "raid_bdev1", 00:10:30.163 "uuid": "f68bf1b1-efb1-4988-8113-0e2002345062", 00:10:30.163 "strip_size_kb": 0, 00:10:30.163 "state": "online", 00:10:30.163 "raid_level": "raid1", 00:10:30.163 "superblock": true, 00:10:30.163 "num_base_bdevs": 3, 00:10:30.163 "num_base_bdevs_discovered": 3, 00:10:30.163 "num_base_bdevs_operational": 3, 00:10:30.163 "base_bdevs_list": [ 00:10:30.163 { 00:10:30.163 "name": "pt1", 00:10:30.163 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.163 "is_configured": true, 00:10:30.163 "data_offset": 2048, 00:10:30.163 "data_size": 63488 00:10:30.163 }, 00:10:30.163 { 00:10:30.163 "name": "pt2", 00:10:30.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.163 "is_configured": true, 00:10:30.163 "data_offset": 2048, 00:10:30.164 "data_size": 63488 00:10:30.164 }, 00:10:30.164 { 00:10:30.164 "name": "pt3", 00:10:30.164 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:30.164 "is_configured": true, 00:10:30.164 "data_offset": 2048, 00:10:30.164 "data_size": 63488 00:10:30.164 } 00:10:30.164 ] 00:10:30.164 }' 00:10:30.164 15:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.164 15:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.437 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:30.437 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:30.437 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:30.437 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:30.437 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:30.437 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:30.437 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:30.437 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.437 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.437 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:30.437 [2024-11-25 15:37:29.034589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.437 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.437 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:30.437 "name": "raid_bdev1", 00:10:30.437 "aliases": [ 00:10:30.437 "f68bf1b1-efb1-4988-8113-0e2002345062" 00:10:30.437 ], 00:10:30.437 "product_name": "Raid Volume", 00:10:30.437 "block_size": 512, 00:10:30.437 "num_blocks": 63488, 00:10:30.437 "uuid": "f68bf1b1-efb1-4988-8113-0e2002345062", 00:10:30.437 "assigned_rate_limits": { 00:10:30.437 "rw_ios_per_sec": 0, 00:10:30.437 "rw_mbytes_per_sec": 0, 00:10:30.437 "r_mbytes_per_sec": 0, 00:10:30.437 "w_mbytes_per_sec": 0 00:10:30.437 }, 00:10:30.437 "claimed": false, 00:10:30.437 "zoned": false, 00:10:30.437 "supported_io_types": { 00:10:30.437 "read": true, 00:10:30.437 "write": true, 00:10:30.437 "unmap": false, 00:10:30.437 "flush": false, 00:10:30.437 "reset": true, 00:10:30.437 "nvme_admin": false, 00:10:30.437 "nvme_io": false, 00:10:30.437 "nvme_io_md": false, 00:10:30.437 "write_zeroes": true, 00:10:30.437 "zcopy": false, 00:10:30.437 "get_zone_info": false, 
00:10:30.437 "zone_management": false, 00:10:30.437 "zone_append": false, 00:10:30.437 "compare": false, 00:10:30.437 "compare_and_write": false, 00:10:30.437 "abort": false, 00:10:30.437 "seek_hole": false, 00:10:30.437 "seek_data": false, 00:10:30.437 "copy": false, 00:10:30.437 "nvme_iov_md": false 00:10:30.437 }, 00:10:30.437 "memory_domains": [ 00:10:30.437 { 00:10:30.437 "dma_device_id": "system", 00:10:30.437 "dma_device_type": 1 00:10:30.437 }, 00:10:30.437 { 00:10:30.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.437 "dma_device_type": 2 00:10:30.437 }, 00:10:30.437 { 00:10:30.437 "dma_device_id": "system", 00:10:30.437 "dma_device_type": 1 00:10:30.437 }, 00:10:30.437 { 00:10:30.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.437 "dma_device_type": 2 00:10:30.437 }, 00:10:30.437 { 00:10:30.437 "dma_device_id": "system", 00:10:30.437 "dma_device_type": 1 00:10:30.437 }, 00:10:30.437 { 00:10:30.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.437 "dma_device_type": 2 00:10:30.437 } 00:10:30.437 ], 00:10:30.437 "driver_specific": { 00:10:30.437 "raid": { 00:10:30.437 "uuid": "f68bf1b1-efb1-4988-8113-0e2002345062", 00:10:30.437 "strip_size_kb": 0, 00:10:30.437 "state": "online", 00:10:30.437 "raid_level": "raid1", 00:10:30.437 "superblock": true, 00:10:30.437 "num_base_bdevs": 3, 00:10:30.437 "num_base_bdevs_discovered": 3, 00:10:30.437 "num_base_bdevs_operational": 3, 00:10:30.437 "base_bdevs_list": [ 00:10:30.437 { 00:10:30.437 "name": "pt1", 00:10:30.437 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.437 "is_configured": true, 00:10:30.437 "data_offset": 2048, 00:10:30.437 "data_size": 63488 00:10:30.437 }, 00:10:30.437 { 00:10:30.437 "name": "pt2", 00:10:30.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.437 "is_configured": true, 00:10:30.437 "data_offset": 2048, 00:10:30.437 "data_size": 63488 00:10:30.437 }, 00:10:30.437 { 00:10:30.437 "name": "pt3", 00:10:30.437 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:30.437 "is_configured": true, 00:10:30.437 "data_offset": 2048, 00:10:30.437 "data_size": 63488 00:10:30.437 } 00:10:30.437 ] 00:10:30.437 } 00:10:30.437 } 00:10:30.437 }' 00:10:30.437 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:30.712 pt2 00:10:30.712 pt3' 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.712 15:37:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.712 [2024-11-25 15:37:29.258137] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f68bf1b1-efb1-4988-8113-0e2002345062 '!=' f68bf1b1-efb1-4988-8113-0e2002345062 ']' 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.712 [2024-11-25 15:37:29.289862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.712 15:37:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.712 "name": "raid_bdev1", 00:10:30.712 "uuid": "f68bf1b1-efb1-4988-8113-0e2002345062", 00:10:30.712 "strip_size_kb": 0, 00:10:30.712 "state": "online", 00:10:30.712 "raid_level": "raid1", 00:10:30.712 "superblock": true, 00:10:30.712 "num_base_bdevs": 3, 00:10:30.712 "num_base_bdevs_discovered": 2, 00:10:30.712 "num_base_bdevs_operational": 2, 00:10:30.712 "base_bdevs_list": [ 00:10:30.712 { 00:10:30.712 "name": null, 00:10:30.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.712 "is_configured": false, 00:10:30.712 "data_offset": 0, 00:10:30.712 "data_size": 63488 00:10:30.712 }, 00:10:30.712 { 00:10:30.712 "name": "pt2", 00:10:30.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.712 "is_configured": true, 00:10:30.712 "data_offset": 2048, 00:10:30.712 "data_size": 63488 00:10:30.712 }, 00:10:30.712 { 00:10:30.712 "name": "pt3", 00:10:30.712 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:30.712 "is_configured": true, 00:10:30.712 "data_offset": 2048, 00:10:30.712 "data_size": 63488 00:10:30.712 } 
00:10:30.712 ] 00:10:30.712 }' 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.712 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.279 [2024-11-25 15:37:29.733101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:31.279 [2024-11-25 15:37:29.733135] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:31.279 [2024-11-25 15:37:29.733213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.279 [2024-11-25 15:37:29.733271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:31.279 [2024-11-25 15:37:29.733286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:31.279 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.279 15:37:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.279 [2024-11-25 15:37:29.820895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:31.279 [2024-11-25 15:37:29.820951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.279 [2024-11-25 15:37:29.820967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:31.279 [2024-11-25 15:37:29.820977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.279 [2024-11-25 15:37:29.823104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.279 [2024-11-25 15:37:29.823143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:31.280 [2024-11-25 15:37:29.823217] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:31.280 [2024-11-25 15:37:29.823265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:31.280 pt2 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.280 15:37:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.280 "name": "raid_bdev1", 00:10:31.280 "uuid": "f68bf1b1-efb1-4988-8113-0e2002345062", 00:10:31.280 "strip_size_kb": 0, 00:10:31.280 "state": "configuring", 00:10:31.280 "raid_level": "raid1", 00:10:31.280 "superblock": true, 00:10:31.280 "num_base_bdevs": 3, 00:10:31.280 "num_base_bdevs_discovered": 1, 00:10:31.280 "num_base_bdevs_operational": 2, 00:10:31.280 "base_bdevs_list": [ 00:10:31.280 { 00:10:31.280 "name": null, 00:10:31.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.280 "is_configured": false, 00:10:31.280 "data_offset": 2048, 00:10:31.280 "data_size": 63488 00:10:31.280 }, 00:10:31.280 { 00:10:31.280 "name": "pt2", 00:10:31.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.280 "is_configured": true, 00:10:31.280 "data_offset": 2048, 00:10:31.280 "data_size": 63488 00:10:31.280 }, 00:10:31.280 { 00:10:31.280 "name": null, 00:10:31.280 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:31.280 "is_configured": false, 00:10:31.280 "data_offset": 2048, 00:10:31.280 "data_size": 63488 00:10:31.280 } 
00:10:31.280 ] 00:10:31.280 }' 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.280 15:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.848 [2024-11-25 15:37:30.248194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:31.848 [2024-11-25 15:37:30.248281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.848 [2024-11-25 15:37:30.248302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:31.848 [2024-11-25 15:37:30.248313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.848 [2024-11-25 15:37:30.248766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.848 [2024-11-25 15:37:30.248796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:31.848 [2024-11-25 15:37:30.248890] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:31.848 [2024-11-25 15:37:30.248922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:31.848 [2024-11-25 15:37:30.249065] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:31.848 [2024-11-25 15:37:30.249087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:31.848 [2024-11-25 15:37:30.249337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:31.848 [2024-11-25 15:37:30.249496] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:31.848 [2024-11-25 15:37:30.249512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:31.848 [2024-11-25 15:37:30.249658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.848 pt3 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.848 
15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.848 "name": "raid_bdev1", 00:10:31.848 "uuid": "f68bf1b1-efb1-4988-8113-0e2002345062", 00:10:31.848 "strip_size_kb": 0, 00:10:31.848 "state": "online", 00:10:31.848 "raid_level": "raid1", 00:10:31.848 "superblock": true, 00:10:31.848 "num_base_bdevs": 3, 00:10:31.848 "num_base_bdevs_discovered": 2, 00:10:31.848 "num_base_bdevs_operational": 2, 00:10:31.848 "base_bdevs_list": [ 00:10:31.848 { 00:10:31.848 "name": null, 00:10:31.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.848 "is_configured": false, 00:10:31.848 "data_offset": 2048, 00:10:31.848 "data_size": 63488 00:10:31.848 }, 00:10:31.848 { 00:10:31.848 "name": "pt2", 00:10:31.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.848 "is_configured": true, 00:10:31.848 "data_offset": 2048, 00:10:31.848 "data_size": 63488 00:10:31.848 }, 00:10:31.848 { 00:10:31.848 "name": "pt3", 00:10:31.848 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:31.848 "is_configured": true, 00:10:31.848 "data_offset": 2048, 00:10:31.848 "data_size": 63488 00:10:31.848 } 00:10:31.848 ] 00:10:31.848 }' 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.848 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.108 [2024-11-25 15:37:30.671462] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.108 [2024-11-25 15:37:30.671499] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.108 [2024-11-25 15:37:30.671577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.108 [2024-11-25 15:37:30.671636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.108 [2024-11-25 15:37:30.671645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.108 [2024-11-25 15:37:30.743332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:32.108 [2024-11-25 15:37:30.743390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.108 [2024-11-25 15:37:30.743410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:32.108 [2024-11-25 15:37:30.743419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.108 [2024-11-25 15:37:30.745557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.108 [2024-11-25 15:37:30.745594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:32.108 [2024-11-25 15:37:30.745670] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:32.108 [2024-11-25 15:37:30.745724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:32.108 [2024-11-25 15:37:30.745869] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:32.108 [2024-11-25 15:37:30.745887] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.108 [2024-11-25 15:37:30.745904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:32.108 [2024-11-25 15:37:30.745965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:32.108 pt1 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.108 15:37:30 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.367 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.367 "name": "raid_bdev1", 00:10:32.367 "uuid": "f68bf1b1-efb1-4988-8113-0e2002345062", 00:10:32.367 "strip_size_kb": 0, 00:10:32.367 "state": "configuring", 00:10:32.367 "raid_level": "raid1", 00:10:32.367 "superblock": true, 00:10:32.367 "num_base_bdevs": 3, 00:10:32.367 "num_base_bdevs_discovered": 1, 00:10:32.367 "num_base_bdevs_operational": 2, 00:10:32.367 "base_bdevs_list": [ 00:10:32.367 { 00:10:32.367 "name": null, 00:10:32.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.367 "is_configured": false, 00:10:32.367 "data_offset": 2048, 00:10:32.367 "data_size": 63488 00:10:32.367 }, 00:10:32.367 { 00:10:32.367 "name": "pt2", 00:10:32.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:32.367 "is_configured": true, 00:10:32.367 "data_offset": 2048, 00:10:32.367 "data_size": 63488 00:10:32.367 }, 00:10:32.367 { 00:10:32.367 "name": null, 00:10:32.367 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:32.367 "is_configured": false, 00:10:32.367 "data_offset": 2048, 00:10:32.367 "data_size": 63488 00:10:32.367 } 00:10:32.367 ] 00:10:32.367 }' 00:10:32.367 15:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.367 15:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.626 [2024-11-25 15:37:31.262459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:32.626 [2024-11-25 15:37:31.262537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.626 [2024-11-25 15:37:31.262564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:32.626 [2024-11-25 15:37:31.262573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.626 [2024-11-25 15:37:31.263043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.626 [2024-11-25 15:37:31.263069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:32.626 [2024-11-25 15:37:31.263154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:32.626 [2024-11-25 15:37:31.263202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:32.626 [2024-11-25 15:37:31.263340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:32.626 [2024-11-25 15:37:31.263356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:32.626 [2024-11-25 15:37:31.263606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:32.626 [2024-11-25 15:37:31.263783] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:32.626 [2024-11-25 15:37:31.263802] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:32.626 [2024-11-25 15:37:31.263949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.626 pt3 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.626 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:32.886 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.886 "name": "raid_bdev1", 00:10:32.886 "uuid": "f68bf1b1-efb1-4988-8113-0e2002345062", 00:10:32.886 "strip_size_kb": 0, 00:10:32.886 "state": "online", 00:10:32.886 "raid_level": "raid1", 00:10:32.886 "superblock": true, 00:10:32.886 "num_base_bdevs": 3, 00:10:32.886 "num_base_bdevs_discovered": 2, 00:10:32.886 "num_base_bdevs_operational": 2, 00:10:32.886 "base_bdevs_list": [ 00:10:32.886 { 00:10:32.886 "name": null, 00:10:32.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.886 "is_configured": false, 00:10:32.886 "data_offset": 2048, 00:10:32.886 "data_size": 63488 00:10:32.886 }, 00:10:32.886 { 00:10:32.886 "name": "pt2", 00:10:32.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:32.886 "is_configured": true, 00:10:32.886 "data_offset": 2048, 00:10:32.886 "data_size": 63488 00:10:32.886 }, 00:10:32.886 { 00:10:32.886 "name": "pt3", 00:10:32.886 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:32.886 "is_configured": true, 00:10:32.886 "data_offset": 2048, 00:10:32.886 "data_size": 63488 00:10:32.886 } 00:10:32.886 ] 00:10:32.886 }' 00:10:32.886 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.886 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.146 [2024-11-25 15:37:31.729943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f68bf1b1-efb1-4988-8113-0e2002345062 '!=' f68bf1b1-efb1-4988-8113-0e2002345062 ']' 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68409 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68409 ']' 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68409 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68409 00:10:33.146 killing process with pid 68409 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68409' 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68409 00:10:33.146 [2024-11-25 15:37:31.807130] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.146 [2024-11-25 15:37:31.807223] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.146 [2024-11-25 15:37:31.807281] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.146 [2024-11-25 15:37:31.807292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:33.146 15:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68409 00:10:33.714 [2024-11-25 15:37:32.106992] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.654 15:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:34.654 00:10:34.654 real 0m7.490s 00:10:34.654 user 0m11.741s 00:10:34.654 sys 0m1.306s 00:10:34.654 15:37:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.654 15:37:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.654 ************************************ 00:10:34.654 END TEST raid_superblock_test 00:10:34.654 ************************************ 00:10:34.654 15:37:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:34.654 15:37:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:34.654 15:37:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.654 15:37:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.654 ************************************ 00:10:34.654 START TEST raid_read_error_test 00:10:34.654 ************************************ 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:34.654 15:37:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:34.654 15:37:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fytkXUoHNO 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68855 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68855 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68855 ']' 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.654 15:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.914 [2024-11-25 15:37:33.338676] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:10:34.914 [2024-11-25 15:37:33.338794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68855 ] 00:10:34.914 [2024-11-25 15:37:33.512496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.173 [2024-11-25 15:37:33.622219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.173 [2024-11-25 15:37:33.817156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.173 [2024-11-25 15:37:33.817198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.741 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.741 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:35.741 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.741 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:35.741 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.741 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.741 BaseBdev1_malloc 00:10:35.741 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.741 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:35.741 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.741 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.741 true 00:10:35.741 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:35.741 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.742 [2024-11-25 15:37:34.234411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:35.742 [2024-11-25 15:37:34.234480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.742 [2024-11-25 15:37:34.234500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:35.742 [2024-11-25 15:37:34.234511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.742 [2024-11-25 15:37:34.236586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.742 [2024-11-25 15:37:34.236628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:35.742 BaseBdev1 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.742 BaseBdev2_malloc 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.742 true 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.742 [2024-11-25 15:37:34.301411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:35.742 [2024-11-25 15:37:34.301464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.742 [2024-11-25 15:37:34.301480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:35.742 [2024-11-25 15:37:34.301490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.742 [2024-11-25 15:37:34.303540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.742 [2024-11-25 15:37:34.303580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:35.742 BaseBdev2 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.742 BaseBdev3_malloc 00:10:35.742 15:37:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.742 true 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.742 [2024-11-25 15:37:34.379270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:35.742 [2024-11-25 15:37:34.379324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.742 [2024-11-25 15:37:34.379341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:35.742 [2024-11-25 15:37:34.379351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.742 [2024-11-25 15:37:34.381389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.742 [2024-11-25 15:37:34.381426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:35.742 BaseBdev3 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.742 [2024-11-25 15:37:34.391311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.742 [2024-11-25 15:37:34.393116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.742 [2024-11-25 15:37:34.393188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.742 [2024-11-25 15:37:34.393377] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:35.742 [2024-11-25 15:37:34.393389] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:35.742 [2024-11-25 15:37:34.393637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:35.742 [2024-11-25 15:37:34.393829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:35.742 [2024-11-25 15:37:34.393850] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:35.742 [2024-11-25 15:37:34.394005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.742 15:37:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.742 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.002 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.002 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.002 "name": "raid_bdev1", 00:10:36.002 "uuid": "ffe0352a-34d3-4a34-abdf-9f06af267e58", 00:10:36.002 "strip_size_kb": 0, 00:10:36.002 "state": "online", 00:10:36.002 "raid_level": "raid1", 00:10:36.002 "superblock": true, 00:10:36.002 "num_base_bdevs": 3, 00:10:36.002 "num_base_bdevs_discovered": 3, 00:10:36.002 "num_base_bdevs_operational": 3, 00:10:36.002 "base_bdevs_list": [ 00:10:36.002 { 00:10:36.002 "name": "BaseBdev1", 00:10:36.002 "uuid": "294e2928-e7ef-5e89-b577-a340b4157604", 00:10:36.002 "is_configured": true, 00:10:36.002 "data_offset": 2048, 00:10:36.002 "data_size": 63488 00:10:36.002 }, 00:10:36.002 { 00:10:36.002 "name": "BaseBdev2", 00:10:36.002 "uuid": "ace05ea7-01e2-5e4d-b9b3-4cd9239a37c1", 00:10:36.002 "is_configured": true, 00:10:36.002 "data_offset": 2048, 00:10:36.002 "data_size": 63488 
00:10:36.002 }, 00:10:36.002 { 00:10:36.002 "name": "BaseBdev3", 00:10:36.002 "uuid": "f3615b67-d626-5b05-a63e-ce9cd6aa510a", 00:10:36.002 "is_configured": true, 00:10:36.002 "data_offset": 2048, 00:10:36.002 "data_size": 63488 00:10:36.002 } 00:10:36.002 ] 00:10:36.002 }' 00:10:36.002 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.002 15:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.262 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:36.262 15:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:36.262 [2024-11-25 15:37:34.895841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:37.201 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:37.201 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.201 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.202 
15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.202 "name": "raid_bdev1", 00:10:37.202 "uuid": "ffe0352a-34d3-4a34-abdf-9f06af267e58", 00:10:37.202 "strip_size_kb": 0, 00:10:37.202 "state": "online", 00:10:37.202 "raid_level": "raid1", 00:10:37.202 "superblock": true, 00:10:37.202 "num_base_bdevs": 3, 00:10:37.202 "num_base_bdevs_discovered": 3, 00:10:37.202 "num_base_bdevs_operational": 3, 00:10:37.202 "base_bdevs_list": [ 00:10:37.202 { 00:10:37.202 "name": "BaseBdev1", 00:10:37.202 "uuid": "294e2928-e7ef-5e89-b577-a340b4157604", 
00:10:37.202 "is_configured": true, 00:10:37.202 "data_offset": 2048, 00:10:37.202 "data_size": 63488 00:10:37.202 }, 00:10:37.202 { 00:10:37.202 "name": "BaseBdev2", 00:10:37.202 "uuid": "ace05ea7-01e2-5e4d-b9b3-4cd9239a37c1", 00:10:37.202 "is_configured": true, 00:10:37.202 "data_offset": 2048, 00:10:37.202 "data_size": 63488 00:10:37.202 }, 00:10:37.202 { 00:10:37.202 "name": "BaseBdev3", 00:10:37.202 "uuid": "f3615b67-d626-5b05-a63e-ce9cd6aa510a", 00:10:37.202 "is_configured": true, 00:10:37.202 "data_offset": 2048, 00:10:37.202 "data_size": 63488 00:10:37.202 } 00:10:37.202 ] 00:10:37.202 }' 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.202 15:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.771 15:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:37.771 15:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.771 15:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.771 [2024-11-25 15:37:36.226710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:37.771 [2024-11-25 15:37:36.226747] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.771 [2024-11-25 15:37:36.229526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.771 [2024-11-25 15:37:36.229574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.771 [2024-11-25 15:37:36.229675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.771 [2024-11-25 15:37:36.229685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:37.771 { 00:10:37.771 "results": [ 00:10:37.771 { 00:10:37.771 "job": "raid_bdev1", 
00:10:37.771 "core_mask": "0x1", 00:10:37.771 "workload": "randrw", 00:10:37.771 "percentage": 50, 00:10:37.771 "status": "finished", 00:10:37.771 "queue_depth": 1, 00:10:37.771 "io_size": 131072, 00:10:37.771 "runtime": 1.331623, 00:10:37.771 "iops": 13542.872119210917, 00:10:37.771 "mibps": 1692.8590149013646, 00:10:37.771 "io_failed": 0, 00:10:37.771 "io_timeout": 0, 00:10:37.771 "avg_latency_us": 71.24749069322236, 00:10:37.771 "min_latency_us": 23.699563318777294, 00:10:37.771 "max_latency_us": 1502.46288209607 00:10:37.771 } 00:10:37.771 ], 00:10:37.771 "core_count": 1 00:10:37.771 } 00:10:37.771 15:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.771 15:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68855 00:10:37.771 15:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68855 ']' 00:10:37.771 15:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68855 00:10:37.771 15:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:37.771 15:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.771 15:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68855 00:10:37.771 15:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.771 15:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.771 killing process with pid 68855 00:10:37.771 15:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68855' 00:10:37.771 15:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68855 00:10:37.771 [2024-11-25 15:37:36.270040] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:37.771 15:37:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68855 00:10:38.029 [2024-11-25 15:37:36.504452] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:39.411 15:37:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fytkXUoHNO 00:10:39.411 15:37:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:39.411 15:37:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:39.411 15:37:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:39.411 15:37:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:39.411 15:37:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:39.411 15:37:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:39.411 15:37:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:39.411 00:10:39.411 real 0m4.437s 00:10:39.411 user 0m5.234s 00:10:39.411 sys 0m0.526s 00:10:39.411 15:37:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.411 15:37:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.411 ************************************ 00:10:39.411 END TEST raid_read_error_test 00:10:39.411 ************************************ 00:10:39.411 15:37:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:39.411 15:37:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:39.411 15:37:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.411 15:37:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:39.411 ************************************ 00:10:39.411 START TEST raid_write_error_test 00:10:39.411 ************************************ 00:10:39.411 15:37:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:39.411 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:39.411 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2l5gZEe0zv 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68995 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68995 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 68995 ']' 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.412 15:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.412 [2024-11-25 15:37:37.843573] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:10:39.412 [2024-11-25 15:37:37.843694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68995 ] 00:10:39.412 [2024-11-25 15:37:38.018193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.672 [2024-11-25 15:37:38.133732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.672 [2024-11-25 15:37:38.337129] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.672 [2024-11-25 15:37:38.337176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.268 BaseBdev1_malloc 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.268 true 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.268 [2024-11-25 15:37:38.738674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:40.268 [2024-11-25 15:37:38.738736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.268 [2024-11-25 15:37:38.738755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:40.268 [2024-11-25 15:37:38.738766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.268 [2024-11-25 15:37:38.740860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.268 [2024-11-25 15:37:38.740900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:40.268 BaseBdev1 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.268 BaseBdev2_malloc 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.268 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.268 true 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.269 [2024-11-25 15:37:38.800979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:40.269 [2024-11-25 15:37:38.801061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.269 [2024-11-25 15:37:38.801078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:40.269 [2024-11-25 15:37:38.801089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.269 [2024-11-25 15:37:38.803150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.269 [2024-11-25 15:37:38.803189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:40.269 BaseBdev2 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.269 15:37:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.269 BaseBdev3_malloc 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.269 true 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.269 [2024-11-25 15:37:38.865474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:40.269 [2024-11-25 15:37:38.865538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.269 [2024-11-25 15:37:38.865555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:40.269 [2024-11-25 15:37:38.865565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.269 [2024-11-25 15:37:38.867627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.269 [2024-11-25 15:37:38.867668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:40.269 BaseBdev3 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.269 [2024-11-25 15:37:38.873514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.269 [2024-11-25 15:37:38.875310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.269 [2024-11-25 15:37:38.875390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.269 [2024-11-25 15:37:38.875588] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:40.269 [2024-11-25 15:37:38.875603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:40.269 [2024-11-25 15:37:38.875837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:40.269 [2024-11-25 15:37:38.876029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:40.269 [2024-11-25 15:37:38.876051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:40.269 [2024-11-25 15:37:38.876201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.269 "name": "raid_bdev1", 00:10:40.269 "uuid": "bc030b78-cf2b-47e8-ba8e-ea995085fdf8", 00:10:40.269 "strip_size_kb": 0, 00:10:40.269 "state": "online", 00:10:40.269 "raid_level": "raid1", 00:10:40.269 "superblock": true, 00:10:40.269 "num_base_bdevs": 3, 00:10:40.269 "num_base_bdevs_discovered": 3, 00:10:40.269 "num_base_bdevs_operational": 3, 00:10:40.269 "base_bdevs_list": [ 00:10:40.269 { 00:10:40.269 "name": "BaseBdev1", 00:10:40.269 
"uuid": "02245c46-9562-5633-8170-c7386f75504f", 00:10:40.269 "is_configured": true, 00:10:40.269 "data_offset": 2048, 00:10:40.269 "data_size": 63488 00:10:40.269 }, 00:10:40.269 { 00:10:40.269 "name": "BaseBdev2", 00:10:40.269 "uuid": "7e5457a1-1310-5846-98dd-4bd5d363edb1", 00:10:40.269 "is_configured": true, 00:10:40.269 "data_offset": 2048, 00:10:40.269 "data_size": 63488 00:10:40.269 }, 00:10:40.269 { 00:10:40.269 "name": "BaseBdev3", 00:10:40.269 "uuid": "e8b2f701-d9f6-51ed-8c95-dfbb5f01ea26", 00:10:40.269 "is_configured": true, 00:10:40.269 "data_offset": 2048, 00:10:40.269 "data_size": 63488 00:10:40.269 } 00:10:40.269 ] 00:10:40.269 }' 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.269 15:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.839 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:40.839 15:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:40.839 [2024-11-25 15:37:39.353993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:41.777 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:41.777 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.777 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.777 [2024-11-25 15:37:40.286257] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:41.777 [2024-11-25 15:37:40.286312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:41.777 [2024-11-25 15:37:40.286525] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:41.777 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.777 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:41.777 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.778 "name": "raid_bdev1", 00:10:41.778 "uuid": "bc030b78-cf2b-47e8-ba8e-ea995085fdf8", 00:10:41.778 "strip_size_kb": 0, 00:10:41.778 "state": "online", 00:10:41.778 "raid_level": "raid1", 00:10:41.778 "superblock": true, 00:10:41.778 "num_base_bdevs": 3, 00:10:41.778 "num_base_bdevs_discovered": 2, 00:10:41.778 "num_base_bdevs_operational": 2, 00:10:41.778 "base_bdevs_list": [ 00:10:41.778 { 00:10:41.778 "name": null, 00:10:41.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.778 "is_configured": false, 00:10:41.778 "data_offset": 0, 00:10:41.778 "data_size": 63488 00:10:41.778 }, 00:10:41.778 { 00:10:41.778 "name": "BaseBdev2", 00:10:41.778 "uuid": "7e5457a1-1310-5846-98dd-4bd5d363edb1", 00:10:41.778 "is_configured": true, 00:10:41.778 "data_offset": 2048, 00:10:41.778 "data_size": 63488 00:10:41.778 }, 00:10:41.778 { 00:10:41.778 "name": "BaseBdev3", 00:10:41.778 "uuid": "e8b2f701-d9f6-51ed-8c95-dfbb5f01ea26", 00:10:41.778 "is_configured": true, 00:10:41.778 "data_offset": 2048, 00:10:41.778 "data_size": 63488 00:10:41.778 } 00:10:41.778 ] 00:10:41.778 }' 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.778 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.037 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:42.037 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.037 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.296 [2024-11-25 15:37:40.719867] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:42.296 [2024-11-25 15:37:40.719906] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.296 [2024-11-25 15:37:40.722533] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.296 [2024-11-25 15:37:40.722622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.296 [2024-11-25 15:37:40.722706] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.296 [2024-11-25 15:37:40.722719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:42.296 { 00:10:42.296 "results": [ 00:10:42.296 { 00:10:42.296 "job": "raid_bdev1", 00:10:42.296 "core_mask": "0x1", 00:10:42.296 "workload": "randrw", 00:10:42.297 "percentage": 50, 00:10:42.297 "status": "finished", 00:10:42.297 "queue_depth": 1, 00:10:42.297 "io_size": 131072, 00:10:42.297 "runtime": 1.366705, 00:10:42.297 "iops": 15038.35868018336, 00:10:42.297 "mibps": 1879.79483502292, 00:10:42.297 "io_failed": 0, 00:10:42.297 "io_timeout": 0, 00:10:42.297 "avg_latency_us": 63.92520111493621, 00:10:42.297 "min_latency_us": 23.252401746724892, 00:10:42.297 "max_latency_us": 1552.5449781659388 00:10:42.297 } 00:10:42.297 ], 00:10:42.297 "core_count": 1 00:10:42.297 } 00:10:42.297 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.297 15:37:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68995 00:10:42.297 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 68995 ']' 00:10:42.297 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 68995 00:10:42.297 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:42.297 15:37:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.297 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68995 00:10:42.297 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.297 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.297 killing process with pid 68995 00:10:42.297 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68995' 00:10:42.297 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 68995 00:10:42.297 [2024-11-25 15:37:40.756960] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.297 15:37:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 68995 00:10:42.555 [2024-11-25 15:37:40.989734] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.492 15:37:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2l5gZEe0zv 00:10:43.492 15:37:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:43.492 15:37:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:43.492 15:37:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:43.492 15:37:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:43.492 15:37:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:43.492 15:37:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:43.492 15:37:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:43.492 00:10:43.492 real 0m4.423s 00:10:43.492 user 0m5.224s 00:10:43.492 sys 0m0.530s 00:10:43.492 15:37:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.492 15:37:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.492 ************************************ 00:10:43.492 END TEST raid_write_error_test 00:10:43.492 ************************************ 00:10:43.751 15:37:42 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:43.751 15:37:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:43.751 15:37:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:43.751 15:37:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:43.751 15:37:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.751 15:37:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.751 ************************************ 00:10:43.751 START TEST raid_state_function_test 00:10:43.751 ************************************ 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:43.752 
15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69133 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69133' 00:10:43.752 Process raid pid: 69133 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69133 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69133 ']' 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.752 15:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.752 [2024-11-25 15:37:42.329820] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:10:43.752 [2024-11-25 15:37:42.329929] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.011 [2024-11-25 15:37:42.505611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.011 [2024-11-25 15:37:42.622729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.270 [2024-11-25 15:37:42.820825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.270 [2024-11-25 15:37:42.820866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.528 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.528 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:44.528 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.528 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.528 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.528 [2024-11-25 15:37:43.166418] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.528 [2024-11-25 15:37:43.166472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.528 [2024-11-25 15:37:43.166483] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.528 [2024-11-25 15:37:43.166493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.528 [2024-11-25 15:37:43.166500] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:44.528 [2024-11-25 15:37:43.166508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:44.528 [2024-11-25 15:37:43.166515] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:44.528 [2024-11-25 15:37:43.166523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:44.528 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.528 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.528 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.528 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.528 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.529 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.529 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.529 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.529 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.529 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.529 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.529 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.529 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.529 15:37:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.529 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.529 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.787 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.787 "name": "Existed_Raid", 00:10:44.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.787 "strip_size_kb": 64, 00:10:44.787 "state": "configuring", 00:10:44.787 "raid_level": "raid0", 00:10:44.787 "superblock": false, 00:10:44.787 "num_base_bdevs": 4, 00:10:44.787 "num_base_bdevs_discovered": 0, 00:10:44.788 "num_base_bdevs_operational": 4, 00:10:44.788 "base_bdevs_list": [ 00:10:44.788 { 00:10:44.788 "name": "BaseBdev1", 00:10:44.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.788 "is_configured": false, 00:10:44.788 "data_offset": 0, 00:10:44.788 "data_size": 0 00:10:44.788 }, 00:10:44.788 { 00:10:44.788 "name": "BaseBdev2", 00:10:44.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.788 "is_configured": false, 00:10:44.788 "data_offset": 0, 00:10:44.788 "data_size": 0 00:10:44.788 }, 00:10:44.788 { 00:10:44.788 "name": "BaseBdev3", 00:10:44.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.788 "is_configured": false, 00:10:44.788 "data_offset": 0, 00:10:44.788 "data_size": 0 00:10:44.788 }, 00:10:44.788 { 00:10:44.788 "name": "BaseBdev4", 00:10:44.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.788 "is_configured": false, 00:10:44.788 "data_offset": 0, 00:10:44.788 "data_size": 0 00:10:44.788 } 00:10:44.788 ] 00:10:44.788 }' 00:10:44.788 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.788 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.051 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:45.051 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.051 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.051 [2024-11-25 15:37:43.649530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.052 [2024-11-25 15:37:43.649630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.052 [2024-11-25 15:37:43.661511] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.052 [2024-11-25 15:37:43.661602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.052 [2024-11-25 15:37:43.661634] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.052 [2024-11-25 15:37:43.661660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.052 [2024-11-25 15:37:43.661681] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:45.052 [2024-11-25 15:37:43.661705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.052 [2024-11-25 15:37:43.661725] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:45.052 [2024-11-25 15:37:43.661758] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.052 [2024-11-25 15:37:43.710018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.052 BaseBdev1 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.052 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.314 [ 00:10:45.314 { 00:10:45.314 "name": "BaseBdev1", 00:10:45.314 "aliases": [ 00:10:45.314 "9a269146-3e2b-4331-97cb-1fb72f1dbcac" 00:10:45.314 ], 00:10:45.314 "product_name": "Malloc disk", 00:10:45.314 "block_size": 512, 00:10:45.314 "num_blocks": 65536, 00:10:45.314 "uuid": "9a269146-3e2b-4331-97cb-1fb72f1dbcac", 00:10:45.314 "assigned_rate_limits": { 00:10:45.314 "rw_ios_per_sec": 0, 00:10:45.314 "rw_mbytes_per_sec": 0, 00:10:45.314 "r_mbytes_per_sec": 0, 00:10:45.314 "w_mbytes_per_sec": 0 00:10:45.314 }, 00:10:45.314 "claimed": true, 00:10:45.314 "claim_type": "exclusive_write", 00:10:45.314 "zoned": false, 00:10:45.314 "supported_io_types": { 00:10:45.314 "read": true, 00:10:45.314 "write": true, 00:10:45.314 "unmap": true, 00:10:45.314 "flush": true, 00:10:45.314 "reset": true, 00:10:45.314 "nvme_admin": false, 00:10:45.314 "nvme_io": false, 00:10:45.314 "nvme_io_md": false, 00:10:45.314 "write_zeroes": true, 00:10:45.314 "zcopy": true, 00:10:45.314 "get_zone_info": false, 00:10:45.314 "zone_management": false, 00:10:45.314 "zone_append": false, 00:10:45.314 "compare": false, 00:10:45.314 "compare_and_write": false, 00:10:45.314 "abort": true, 00:10:45.314 "seek_hole": false, 00:10:45.314 "seek_data": false, 00:10:45.314 "copy": true, 00:10:45.314 "nvme_iov_md": false 00:10:45.314 }, 00:10:45.314 "memory_domains": [ 00:10:45.314 { 00:10:45.314 "dma_device_id": "system", 00:10:45.314 "dma_device_type": 1 00:10:45.314 }, 00:10:45.314 { 00:10:45.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.314 "dma_device_type": 2 00:10:45.314 } 00:10:45.314 ], 00:10:45.314 "driver_specific": {} 00:10:45.314 } 00:10:45.314 ] 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.314 "name": "Existed_Raid", 
00:10:45.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.314 "strip_size_kb": 64, 00:10:45.314 "state": "configuring", 00:10:45.314 "raid_level": "raid0", 00:10:45.314 "superblock": false, 00:10:45.314 "num_base_bdevs": 4, 00:10:45.314 "num_base_bdevs_discovered": 1, 00:10:45.314 "num_base_bdevs_operational": 4, 00:10:45.314 "base_bdevs_list": [ 00:10:45.314 { 00:10:45.314 "name": "BaseBdev1", 00:10:45.314 "uuid": "9a269146-3e2b-4331-97cb-1fb72f1dbcac", 00:10:45.314 "is_configured": true, 00:10:45.314 "data_offset": 0, 00:10:45.314 "data_size": 65536 00:10:45.314 }, 00:10:45.314 { 00:10:45.314 "name": "BaseBdev2", 00:10:45.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.314 "is_configured": false, 00:10:45.314 "data_offset": 0, 00:10:45.314 "data_size": 0 00:10:45.314 }, 00:10:45.314 { 00:10:45.314 "name": "BaseBdev3", 00:10:45.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.314 "is_configured": false, 00:10:45.314 "data_offset": 0, 00:10:45.314 "data_size": 0 00:10:45.314 }, 00:10:45.314 { 00:10:45.314 "name": "BaseBdev4", 00:10:45.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.314 "is_configured": false, 00:10:45.314 "data_offset": 0, 00:10:45.314 "data_size": 0 00:10:45.314 } 00:10:45.314 ] 00:10:45.314 }' 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.314 15:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.574 [2024-11-25 15:37:44.197223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.574 [2024-11-25 15:37:44.197330] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.574 [2024-11-25 15:37:44.209247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.574 [2024-11-25 15:37:44.211066] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.574 [2024-11-25 15:37:44.211108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.574 [2024-11-25 15:37:44.211118] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:45.574 [2024-11-25 15:37:44.211129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.574 [2024-11-25 15:37:44.211136] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:45.574 [2024-11-25 15:37:44.211145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.574 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.834 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.834 "name": "Existed_Raid", 00:10:45.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.834 "strip_size_kb": 64, 00:10:45.834 "state": "configuring", 00:10:45.834 "raid_level": "raid0", 00:10:45.834 "superblock": false, 00:10:45.834 "num_base_bdevs": 4, 00:10:45.834 
"num_base_bdevs_discovered": 1, 00:10:45.834 "num_base_bdevs_operational": 4, 00:10:45.834 "base_bdevs_list": [ 00:10:45.834 { 00:10:45.834 "name": "BaseBdev1", 00:10:45.834 "uuid": "9a269146-3e2b-4331-97cb-1fb72f1dbcac", 00:10:45.834 "is_configured": true, 00:10:45.834 "data_offset": 0, 00:10:45.834 "data_size": 65536 00:10:45.834 }, 00:10:45.834 { 00:10:45.834 "name": "BaseBdev2", 00:10:45.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.834 "is_configured": false, 00:10:45.834 "data_offset": 0, 00:10:45.834 "data_size": 0 00:10:45.834 }, 00:10:45.834 { 00:10:45.834 "name": "BaseBdev3", 00:10:45.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.834 "is_configured": false, 00:10:45.834 "data_offset": 0, 00:10:45.834 "data_size": 0 00:10:45.834 }, 00:10:45.834 { 00:10:45.834 "name": "BaseBdev4", 00:10:45.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.834 "is_configured": false, 00:10:45.834 "data_offset": 0, 00:10:45.834 "data_size": 0 00:10:45.834 } 00:10:45.834 ] 00:10:45.834 }' 00:10:45.834 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.834 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.094 [2024-11-25 15:37:44.644751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:46.094 BaseBdev2 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:46.094 15:37:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.094 [ 00:10:46.094 { 00:10:46.094 "name": "BaseBdev2", 00:10:46.094 "aliases": [ 00:10:46.094 "65f8470e-9100-45a8-813c-bc321df1cd59" 00:10:46.094 ], 00:10:46.094 "product_name": "Malloc disk", 00:10:46.094 "block_size": 512, 00:10:46.094 "num_blocks": 65536, 00:10:46.094 "uuid": "65f8470e-9100-45a8-813c-bc321df1cd59", 00:10:46.094 "assigned_rate_limits": { 00:10:46.094 "rw_ios_per_sec": 0, 00:10:46.094 "rw_mbytes_per_sec": 0, 00:10:46.094 "r_mbytes_per_sec": 0, 00:10:46.094 "w_mbytes_per_sec": 0 00:10:46.094 }, 00:10:46.094 "claimed": true, 00:10:46.094 "claim_type": "exclusive_write", 00:10:46.094 "zoned": false, 00:10:46.094 "supported_io_types": { 
00:10:46.094 "read": true, 00:10:46.094 "write": true, 00:10:46.094 "unmap": true, 00:10:46.094 "flush": true, 00:10:46.094 "reset": true, 00:10:46.094 "nvme_admin": false, 00:10:46.094 "nvme_io": false, 00:10:46.094 "nvme_io_md": false, 00:10:46.094 "write_zeroes": true, 00:10:46.094 "zcopy": true, 00:10:46.094 "get_zone_info": false, 00:10:46.094 "zone_management": false, 00:10:46.094 "zone_append": false, 00:10:46.094 "compare": false, 00:10:46.094 "compare_and_write": false, 00:10:46.094 "abort": true, 00:10:46.094 "seek_hole": false, 00:10:46.094 "seek_data": false, 00:10:46.094 "copy": true, 00:10:46.094 "nvme_iov_md": false 00:10:46.094 }, 00:10:46.094 "memory_domains": [ 00:10:46.094 { 00:10:46.094 "dma_device_id": "system", 00:10:46.094 "dma_device_type": 1 00:10:46.094 }, 00:10:46.094 { 00:10:46.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.094 "dma_device_type": 2 00:10:46.094 } 00:10:46.094 ], 00:10:46.094 "driver_specific": {} 00:10:46.094 } 00:10:46.094 ] 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.094 "name": "Existed_Raid", 00:10:46.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.094 "strip_size_kb": 64, 00:10:46.094 "state": "configuring", 00:10:46.094 "raid_level": "raid0", 00:10:46.094 "superblock": false, 00:10:46.094 "num_base_bdevs": 4, 00:10:46.094 "num_base_bdevs_discovered": 2, 00:10:46.094 "num_base_bdevs_operational": 4, 00:10:46.094 "base_bdevs_list": [ 00:10:46.094 { 00:10:46.094 "name": "BaseBdev1", 00:10:46.094 "uuid": "9a269146-3e2b-4331-97cb-1fb72f1dbcac", 00:10:46.094 "is_configured": true, 00:10:46.094 "data_offset": 0, 00:10:46.094 "data_size": 65536 00:10:46.094 }, 00:10:46.094 { 00:10:46.094 "name": "BaseBdev2", 00:10:46.094 "uuid": "65f8470e-9100-45a8-813c-bc321df1cd59", 00:10:46.094 
"is_configured": true, 00:10:46.094 "data_offset": 0, 00:10:46.094 "data_size": 65536 00:10:46.094 }, 00:10:46.094 { 00:10:46.094 "name": "BaseBdev3", 00:10:46.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.094 "is_configured": false, 00:10:46.094 "data_offset": 0, 00:10:46.094 "data_size": 0 00:10:46.094 }, 00:10:46.094 { 00:10:46.094 "name": "BaseBdev4", 00:10:46.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.094 "is_configured": false, 00:10:46.094 "data_offset": 0, 00:10:46.094 "data_size": 0 00:10:46.094 } 00:10:46.094 ] 00:10:46.094 }' 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.094 15:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.664 [2024-11-25 15:37:45.190770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.664 BaseBdev3 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.664 [ 00:10:46.664 { 00:10:46.664 "name": "BaseBdev3", 00:10:46.664 "aliases": [ 00:10:46.664 "9df42444-025f-4d6a-9d1b-7efa8acaaa8a" 00:10:46.664 ], 00:10:46.664 "product_name": "Malloc disk", 00:10:46.664 "block_size": 512, 00:10:46.664 "num_blocks": 65536, 00:10:46.664 "uuid": "9df42444-025f-4d6a-9d1b-7efa8acaaa8a", 00:10:46.664 "assigned_rate_limits": { 00:10:46.664 "rw_ios_per_sec": 0, 00:10:46.664 "rw_mbytes_per_sec": 0, 00:10:46.664 "r_mbytes_per_sec": 0, 00:10:46.664 "w_mbytes_per_sec": 0 00:10:46.664 }, 00:10:46.664 "claimed": true, 00:10:46.664 "claim_type": "exclusive_write", 00:10:46.664 "zoned": false, 00:10:46.664 "supported_io_types": { 00:10:46.664 "read": true, 00:10:46.664 "write": true, 00:10:46.664 "unmap": true, 00:10:46.664 "flush": true, 00:10:46.664 "reset": true, 00:10:46.664 "nvme_admin": false, 00:10:46.664 "nvme_io": false, 00:10:46.664 "nvme_io_md": false, 00:10:46.664 "write_zeroes": true, 00:10:46.664 "zcopy": true, 00:10:46.664 "get_zone_info": false, 00:10:46.664 "zone_management": false, 00:10:46.664 "zone_append": false, 00:10:46.664 "compare": false, 00:10:46.664 "compare_and_write": false, 
00:10:46.664 "abort": true, 00:10:46.664 "seek_hole": false, 00:10:46.664 "seek_data": false, 00:10:46.664 "copy": true, 00:10:46.664 "nvme_iov_md": false 00:10:46.664 }, 00:10:46.664 "memory_domains": [ 00:10:46.664 { 00:10:46.664 "dma_device_id": "system", 00:10:46.664 "dma_device_type": 1 00:10:46.664 }, 00:10:46.664 { 00:10:46.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.664 "dma_device_type": 2 00:10:46.664 } 00:10:46.664 ], 00:10:46.664 "driver_specific": {} 00:10:46.664 } 00:10:46.664 ] 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.664 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.665 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.665 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.665 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:46.665 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.665 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.665 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.665 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.665 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.665 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.665 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.665 "name": "Existed_Raid", 00:10:46.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.665 "strip_size_kb": 64, 00:10:46.665 "state": "configuring", 00:10:46.665 "raid_level": "raid0", 00:10:46.665 "superblock": false, 00:10:46.665 "num_base_bdevs": 4, 00:10:46.665 "num_base_bdevs_discovered": 3, 00:10:46.665 "num_base_bdevs_operational": 4, 00:10:46.665 "base_bdevs_list": [ 00:10:46.665 { 00:10:46.665 "name": "BaseBdev1", 00:10:46.665 "uuid": "9a269146-3e2b-4331-97cb-1fb72f1dbcac", 00:10:46.665 "is_configured": true, 00:10:46.665 "data_offset": 0, 00:10:46.665 "data_size": 65536 00:10:46.665 }, 00:10:46.665 { 00:10:46.665 "name": "BaseBdev2", 00:10:46.665 "uuid": "65f8470e-9100-45a8-813c-bc321df1cd59", 00:10:46.665 "is_configured": true, 00:10:46.665 "data_offset": 0, 00:10:46.665 "data_size": 65536 00:10:46.665 }, 00:10:46.665 { 00:10:46.665 "name": "BaseBdev3", 00:10:46.665 "uuid": "9df42444-025f-4d6a-9d1b-7efa8acaaa8a", 00:10:46.665 "is_configured": true, 00:10:46.665 "data_offset": 0, 00:10:46.665 "data_size": 65536 00:10:46.665 }, 00:10:46.665 { 00:10:46.665 "name": "BaseBdev4", 00:10:46.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.665 "is_configured": false, 
00:10:46.665 "data_offset": 0, 00:10:46.665 "data_size": 0 00:10:46.665 } 00:10:46.665 ] 00:10:46.665 }' 00:10:46.665 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.665 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.234 [2024-11-25 15:37:45.661909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:47.234 [2024-11-25 15:37:45.662031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:47.234 [2024-11-25 15:37:45.662076] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:47.234 [2024-11-25 15:37:45.662386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:47.234 [2024-11-25 15:37:45.662614] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:47.234 [2024-11-25 15:37:45.662663] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:47.234 [2024-11-25 15:37:45.662941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.234 BaseBdev4 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.234 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.234 [ 00:10:47.234 { 00:10:47.234 "name": "BaseBdev4", 00:10:47.234 "aliases": [ 00:10:47.234 "b6a8345d-49c4-4663-8408-80bb8fec348e" 00:10:47.234 ], 00:10:47.234 "product_name": "Malloc disk", 00:10:47.234 "block_size": 512, 00:10:47.234 "num_blocks": 65536, 00:10:47.234 "uuid": "b6a8345d-49c4-4663-8408-80bb8fec348e", 00:10:47.234 "assigned_rate_limits": { 00:10:47.234 "rw_ios_per_sec": 0, 00:10:47.234 "rw_mbytes_per_sec": 0, 00:10:47.234 "r_mbytes_per_sec": 0, 00:10:47.234 "w_mbytes_per_sec": 0 00:10:47.234 }, 00:10:47.234 "claimed": true, 00:10:47.234 "claim_type": "exclusive_write", 00:10:47.234 "zoned": false, 00:10:47.234 "supported_io_types": { 00:10:47.235 "read": true, 00:10:47.235 "write": true, 00:10:47.235 "unmap": true, 00:10:47.235 "flush": true, 00:10:47.235 "reset": true, 00:10:47.235 
"nvme_admin": false, 00:10:47.235 "nvme_io": false, 00:10:47.235 "nvme_io_md": false, 00:10:47.235 "write_zeroes": true, 00:10:47.235 "zcopy": true, 00:10:47.235 "get_zone_info": false, 00:10:47.235 "zone_management": false, 00:10:47.235 "zone_append": false, 00:10:47.235 "compare": false, 00:10:47.235 "compare_and_write": false, 00:10:47.235 "abort": true, 00:10:47.235 "seek_hole": false, 00:10:47.235 "seek_data": false, 00:10:47.235 "copy": true, 00:10:47.235 "nvme_iov_md": false 00:10:47.235 }, 00:10:47.235 "memory_domains": [ 00:10:47.235 { 00:10:47.235 "dma_device_id": "system", 00:10:47.235 "dma_device_type": 1 00:10:47.235 }, 00:10:47.235 { 00:10:47.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.235 "dma_device_type": 2 00:10:47.235 } 00:10:47.235 ], 00:10:47.235 "driver_specific": {} 00:10:47.235 } 00:10:47.235 ] 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.235 15:37:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.235 "name": "Existed_Raid", 00:10:47.235 "uuid": "d19efc49-a889-49c4-9864-dfe607c28131", 00:10:47.235 "strip_size_kb": 64, 00:10:47.235 "state": "online", 00:10:47.235 "raid_level": "raid0", 00:10:47.235 "superblock": false, 00:10:47.235 "num_base_bdevs": 4, 00:10:47.235 "num_base_bdevs_discovered": 4, 00:10:47.235 "num_base_bdevs_operational": 4, 00:10:47.235 "base_bdevs_list": [ 00:10:47.235 { 00:10:47.235 "name": "BaseBdev1", 00:10:47.235 "uuid": "9a269146-3e2b-4331-97cb-1fb72f1dbcac", 00:10:47.235 "is_configured": true, 00:10:47.235 "data_offset": 0, 00:10:47.235 "data_size": 65536 00:10:47.235 }, 00:10:47.235 { 00:10:47.235 "name": "BaseBdev2", 00:10:47.235 "uuid": "65f8470e-9100-45a8-813c-bc321df1cd59", 00:10:47.235 "is_configured": true, 00:10:47.235 "data_offset": 0, 00:10:47.235 "data_size": 65536 00:10:47.235 }, 00:10:47.235 { 00:10:47.235 "name": "BaseBdev3", 00:10:47.235 "uuid": 
"9df42444-025f-4d6a-9d1b-7efa8acaaa8a", 00:10:47.235 "is_configured": true, 00:10:47.235 "data_offset": 0, 00:10:47.235 "data_size": 65536 00:10:47.235 }, 00:10:47.235 { 00:10:47.235 "name": "BaseBdev4", 00:10:47.235 "uuid": "b6a8345d-49c4-4663-8408-80bb8fec348e", 00:10:47.235 "is_configured": true, 00:10:47.235 "data_offset": 0, 00:10:47.235 "data_size": 65536 00:10:47.235 } 00:10:47.235 ] 00:10:47.235 }' 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.235 15:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.494 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:47.494 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:47.494 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:47.494 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:47.494 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:47.494 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:47.494 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:47.494 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:47.494 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.494 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.494 [2024-11-25 15:37:46.129452] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.494 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.494 15:37:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:47.494 "name": "Existed_Raid", 00:10:47.494 "aliases": [ 00:10:47.494 "d19efc49-a889-49c4-9864-dfe607c28131" 00:10:47.494 ], 00:10:47.494 "product_name": "Raid Volume", 00:10:47.494 "block_size": 512, 00:10:47.494 "num_blocks": 262144, 00:10:47.494 "uuid": "d19efc49-a889-49c4-9864-dfe607c28131", 00:10:47.494 "assigned_rate_limits": { 00:10:47.494 "rw_ios_per_sec": 0, 00:10:47.494 "rw_mbytes_per_sec": 0, 00:10:47.494 "r_mbytes_per_sec": 0, 00:10:47.494 "w_mbytes_per_sec": 0 00:10:47.494 }, 00:10:47.494 "claimed": false, 00:10:47.494 "zoned": false, 00:10:47.494 "supported_io_types": { 00:10:47.494 "read": true, 00:10:47.494 "write": true, 00:10:47.494 "unmap": true, 00:10:47.494 "flush": true, 00:10:47.494 "reset": true, 00:10:47.494 "nvme_admin": false, 00:10:47.494 "nvme_io": false, 00:10:47.494 "nvme_io_md": false, 00:10:47.494 "write_zeroes": true, 00:10:47.494 "zcopy": false, 00:10:47.494 "get_zone_info": false, 00:10:47.494 "zone_management": false, 00:10:47.494 "zone_append": false, 00:10:47.494 "compare": false, 00:10:47.494 "compare_and_write": false, 00:10:47.494 "abort": false, 00:10:47.494 "seek_hole": false, 00:10:47.494 "seek_data": false, 00:10:47.494 "copy": false, 00:10:47.494 "nvme_iov_md": false 00:10:47.494 }, 00:10:47.494 "memory_domains": [ 00:10:47.494 { 00:10:47.494 "dma_device_id": "system", 00:10:47.494 "dma_device_type": 1 00:10:47.494 }, 00:10:47.494 { 00:10:47.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.494 "dma_device_type": 2 00:10:47.494 }, 00:10:47.494 { 00:10:47.494 "dma_device_id": "system", 00:10:47.494 "dma_device_type": 1 00:10:47.494 }, 00:10:47.494 { 00:10:47.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.494 "dma_device_type": 2 00:10:47.494 }, 00:10:47.494 { 00:10:47.494 "dma_device_id": "system", 00:10:47.494 "dma_device_type": 1 00:10:47.494 }, 00:10:47.494 { 00:10:47.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:47.494 "dma_device_type": 2 00:10:47.494 }, 00:10:47.494 { 00:10:47.494 "dma_device_id": "system", 00:10:47.494 "dma_device_type": 1 00:10:47.494 }, 00:10:47.494 { 00:10:47.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.494 "dma_device_type": 2 00:10:47.494 } 00:10:47.494 ], 00:10:47.494 "driver_specific": { 00:10:47.494 "raid": { 00:10:47.494 "uuid": "d19efc49-a889-49c4-9864-dfe607c28131", 00:10:47.494 "strip_size_kb": 64, 00:10:47.494 "state": "online", 00:10:47.494 "raid_level": "raid0", 00:10:47.494 "superblock": false, 00:10:47.494 "num_base_bdevs": 4, 00:10:47.494 "num_base_bdevs_discovered": 4, 00:10:47.494 "num_base_bdevs_operational": 4, 00:10:47.494 "base_bdevs_list": [ 00:10:47.494 { 00:10:47.494 "name": "BaseBdev1", 00:10:47.494 "uuid": "9a269146-3e2b-4331-97cb-1fb72f1dbcac", 00:10:47.494 "is_configured": true, 00:10:47.494 "data_offset": 0, 00:10:47.494 "data_size": 65536 00:10:47.494 }, 00:10:47.494 { 00:10:47.494 "name": "BaseBdev2", 00:10:47.494 "uuid": "65f8470e-9100-45a8-813c-bc321df1cd59", 00:10:47.494 "is_configured": true, 00:10:47.494 "data_offset": 0, 00:10:47.494 "data_size": 65536 00:10:47.494 }, 00:10:47.494 { 00:10:47.494 "name": "BaseBdev3", 00:10:47.494 "uuid": "9df42444-025f-4d6a-9d1b-7efa8acaaa8a", 00:10:47.494 "is_configured": true, 00:10:47.494 "data_offset": 0, 00:10:47.494 "data_size": 65536 00:10:47.494 }, 00:10:47.494 { 00:10:47.494 "name": "BaseBdev4", 00:10:47.494 "uuid": "b6a8345d-49c4-4663-8408-80bb8fec348e", 00:10:47.494 "is_configured": true, 00:10:47.494 "data_offset": 0, 00:10:47.494 "data_size": 65536 00:10:47.494 } 00:10:47.494 ] 00:10:47.494 } 00:10:47.494 } 00:10:47.494 }' 00:10:47.494 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:47.754 BaseBdev2 00:10:47.754 BaseBdev3 
00:10:47.754 BaseBdev4' 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.754 15:37:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.754 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.014 15:37:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.014 [2024-11-25 15:37:46.448665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.014 [2024-11-25 15:37:46.448739] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.014 [2024-11-25 15:37:46.448813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.014 "name": "Existed_Raid", 00:10:48.014 "uuid": "d19efc49-a889-49c4-9864-dfe607c28131", 00:10:48.014 "strip_size_kb": 64, 00:10:48.014 "state": "offline", 00:10:48.014 "raid_level": "raid0", 00:10:48.014 "superblock": false, 00:10:48.014 "num_base_bdevs": 4, 00:10:48.014 "num_base_bdevs_discovered": 3, 00:10:48.014 "num_base_bdevs_operational": 3, 00:10:48.014 "base_bdevs_list": [ 00:10:48.014 { 00:10:48.014 "name": null, 00:10:48.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.014 "is_configured": false, 00:10:48.014 "data_offset": 0, 00:10:48.014 "data_size": 65536 00:10:48.014 }, 00:10:48.014 { 00:10:48.014 "name": "BaseBdev2", 00:10:48.014 "uuid": "65f8470e-9100-45a8-813c-bc321df1cd59", 00:10:48.014 "is_configured": 
true, 00:10:48.014 "data_offset": 0, 00:10:48.014 "data_size": 65536 00:10:48.014 }, 00:10:48.014 { 00:10:48.014 "name": "BaseBdev3", 00:10:48.014 "uuid": "9df42444-025f-4d6a-9d1b-7efa8acaaa8a", 00:10:48.014 "is_configured": true, 00:10:48.014 "data_offset": 0, 00:10:48.014 "data_size": 65536 00:10:48.014 }, 00:10:48.014 { 00:10:48.014 "name": "BaseBdev4", 00:10:48.014 "uuid": "b6a8345d-49c4-4663-8408-80bb8fec348e", 00:10:48.014 "is_configured": true, 00:10:48.014 "data_offset": 0, 00:10:48.014 "data_size": 65536 00:10:48.014 } 00:10:48.014 ] 00:10:48.014 }' 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.014 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.584 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:48.584 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.584 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.584 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.584 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.584 15:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.584 15:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.584 [2024-11-25 15:37:47.016244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.584 [2024-11-25 15:37:47.164072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.584 15:37:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.584 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.844 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.844 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.844 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.844 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:48.844 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.844 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.844 [2024-11-25 15:37:47.293377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:48.845 [2024-11-25 15:37:47.293469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.845 BaseBdev2 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.845 [ 00:10:48.845 { 00:10:48.845 "name": "BaseBdev2", 00:10:48.845 "aliases": [ 00:10:48.845 "9ebb4ef1-7405-4513-bc57-deda11dea430" 00:10:48.845 ], 00:10:48.845 "product_name": "Malloc disk", 00:10:48.845 "block_size": 512, 00:10:48.845 "num_blocks": 65536, 00:10:48.845 "uuid": "9ebb4ef1-7405-4513-bc57-deda11dea430", 00:10:48.845 "assigned_rate_limits": { 00:10:48.845 "rw_ios_per_sec": 0, 00:10:48.845 "rw_mbytes_per_sec": 0, 00:10:48.845 "r_mbytes_per_sec": 0, 00:10:48.845 "w_mbytes_per_sec": 0 00:10:48.845 }, 00:10:48.845 "claimed": false, 00:10:48.845 "zoned": false, 00:10:48.845 "supported_io_types": { 00:10:48.845 "read": true, 00:10:48.845 "write": true, 00:10:48.845 "unmap": true, 00:10:48.845 "flush": true, 00:10:48.845 "reset": true, 00:10:48.845 "nvme_admin": false, 00:10:48.845 "nvme_io": false, 00:10:48.845 "nvme_io_md": false, 00:10:48.845 "write_zeroes": true, 00:10:48.845 "zcopy": true, 00:10:48.845 "get_zone_info": false, 00:10:48.845 "zone_management": false, 00:10:48.845 "zone_append": false, 00:10:48.845 "compare": false, 00:10:48.845 "compare_and_write": false, 00:10:48.845 "abort": true, 00:10:48.845 "seek_hole": false, 00:10:48.845 
"seek_data": false, 00:10:48.845 "copy": true, 00:10:48.845 "nvme_iov_md": false 00:10:48.845 }, 00:10:48.845 "memory_domains": [ 00:10:48.845 { 00:10:48.845 "dma_device_id": "system", 00:10:48.845 "dma_device_type": 1 00:10:48.845 }, 00:10:48.845 { 00:10:48.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.845 "dma_device_type": 2 00:10:48.845 } 00:10:48.845 ], 00:10:48.845 "driver_specific": {} 00:10:48.845 } 00:10:48.845 ] 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.845 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.105 BaseBdev3 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.105 [ 00:10:49.105 { 00:10:49.105 "name": "BaseBdev3", 00:10:49.105 "aliases": [ 00:10:49.105 "d09cae27-fe06-4270-b6ce-c6199c4ca9f3" 00:10:49.105 ], 00:10:49.105 "product_name": "Malloc disk", 00:10:49.105 "block_size": 512, 00:10:49.105 "num_blocks": 65536, 00:10:49.105 "uuid": "d09cae27-fe06-4270-b6ce-c6199c4ca9f3", 00:10:49.105 "assigned_rate_limits": { 00:10:49.105 "rw_ios_per_sec": 0, 00:10:49.105 "rw_mbytes_per_sec": 0, 00:10:49.105 "r_mbytes_per_sec": 0, 00:10:49.105 "w_mbytes_per_sec": 0 00:10:49.105 }, 00:10:49.105 "claimed": false, 00:10:49.105 "zoned": false, 00:10:49.105 "supported_io_types": { 00:10:49.105 "read": true, 00:10:49.105 "write": true, 00:10:49.105 "unmap": true, 00:10:49.105 "flush": true, 00:10:49.105 "reset": true, 00:10:49.105 "nvme_admin": false, 00:10:49.105 "nvme_io": false, 00:10:49.105 "nvme_io_md": false, 00:10:49.105 "write_zeroes": true, 00:10:49.105 "zcopy": true, 00:10:49.105 "get_zone_info": false, 00:10:49.105 "zone_management": false, 00:10:49.105 "zone_append": false, 00:10:49.105 "compare": false, 00:10:49.105 "compare_and_write": false, 00:10:49.105 "abort": true, 00:10:49.105 "seek_hole": false, 00:10:49.105 "seek_data": false, 
00:10:49.105 "copy": true, 00:10:49.105 "nvme_iov_md": false 00:10:49.105 }, 00:10:49.105 "memory_domains": [ 00:10:49.105 { 00:10:49.105 "dma_device_id": "system", 00:10:49.105 "dma_device_type": 1 00:10:49.105 }, 00:10:49.105 { 00:10:49.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.105 "dma_device_type": 2 00:10:49.105 } 00:10:49.105 ], 00:10:49.105 "driver_specific": {} 00:10:49.105 } 00:10:49.105 ] 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.105 BaseBdev4 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.105 
15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.105 [ 00:10:49.105 { 00:10:49.105 "name": "BaseBdev4", 00:10:49.105 "aliases": [ 00:10:49.105 "7189bb0a-0421-43d7-aef4-3b0e7434bc04" 00:10:49.105 ], 00:10:49.105 "product_name": "Malloc disk", 00:10:49.105 "block_size": 512, 00:10:49.105 "num_blocks": 65536, 00:10:49.105 "uuid": "7189bb0a-0421-43d7-aef4-3b0e7434bc04", 00:10:49.105 "assigned_rate_limits": { 00:10:49.105 "rw_ios_per_sec": 0, 00:10:49.105 "rw_mbytes_per_sec": 0, 00:10:49.105 "r_mbytes_per_sec": 0, 00:10:49.105 "w_mbytes_per_sec": 0 00:10:49.105 }, 00:10:49.105 "claimed": false, 00:10:49.105 "zoned": false, 00:10:49.105 "supported_io_types": { 00:10:49.105 "read": true, 00:10:49.105 "write": true, 00:10:49.105 "unmap": true, 00:10:49.105 "flush": true, 00:10:49.105 "reset": true, 00:10:49.105 "nvme_admin": false, 00:10:49.105 "nvme_io": false, 00:10:49.105 "nvme_io_md": false, 00:10:49.105 "write_zeroes": true, 00:10:49.105 "zcopy": true, 00:10:49.105 "get_zone_info": false, 00:10:49.105 "zone_management": false, 00:10:49.105 "zone_append": false, 00:10:49.105 "compare": false, 00:10:49.105 "compare_and_write": false, 00:10:49.105 "abort": true, 00:10:49.105 "seek_hole": false, 00:10:49.105 "seek_data": false, 00:10:49.105 
"copy": true, 00:10:49.105 "nvme_iov_md": false 00:10:49.105 }, 00:10:49.105 "memory_domains": [ 00:10:49.105 { 00:10:49.105 "dma_device_id": "system", 00:10:49.105 "dma_device_type": 1 00:10:49.105 }, 00:10:49.105 { 00:10:49.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.105 "dma_device_type": 2 00:10:49.105 } 00:10:49.105 ], 00:10:49.105 "driver_specific": {} 00:10:49.105 } 00:10:49.105 ] 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.105 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.106 [2024-11-25 15:37:47.682405] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.106 [2024-11-25 15:37:47.682499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.106 [2024-11-25 15:37:47.682540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.106 [2024-11-25 15:37:47.684384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.106 [2024-11-25 15:37:47.684487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.106 15:37:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.106 "name": "Existed_Raid", 00:10:49.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.106 "strip_size_kb": 64, 00:10:49.106 "state": "configuring", 00:10:49.106 
"raid_level": "raid0", 00:10:49.106 "superblock": false, 00:10:49.106 "num_base_bdevs": 4, 00:10:49.106 "num_base_bdevs_discovered": 3, 00:10:49.106 "num_base_bdevs_operational": 4, 00:10:49.106 "base_bdevs_list": [ 00:10:49.106 { 00:10:49.106 "name": "BaseBdev1", 00:10:49.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.106 "is_configured": false, 00:10:49.106 "data_offset": 0, 00:10:49.106 "data_size": 0 00:10:49.106 }, 00:10:49.106 { 00:10:49.106 "name": "BaseBdev2", 00:10:49.106 "uuid": "9ebb4ef1-7405-4513-bc57-deda11dea430", 00:10:49.106 "is_configured": true, 00:10:49.106 "data_offset": 0, 00:10:49.106 "data_size": 65536 00:10:49.106 }, 00:10:49.106 { 00:10:49.106 "name": "BaseBdev3", 00:10:49.106 "uuid": "d09cae27-fe06-4270-b6ce-c6199c4ca9f3", 00:10:49.106 "is_configured": true, 00:10:49.106 "data_offset": 0, 00:10:49.106 "data_size": 65536 00:10:49.106 }, 00:10:49.106 { 00:10:49.106 "name": "BaseBdev4", 00:10:49.106 "uuid": "7189bb0a-0421-43d7-aef4-3b0e7434bc04", 00:10:49.106 "is_configured": true, 00:10:49.106 "data_offset": 0, 00:10:49.106 "data_size": 65536 00:10:49.106 } 00:10:49.106 ] 00:10:49.106 }' 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.106 15:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.674 [2024-11-25 15:37:48.117687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.674 "name": "Existed_Raid", 00:10:49.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.674 "strip_size_kb": 64, 00:10:49.674 "state": "configuring", 00:10:49.674 "raid_level": "raid0", 00:10:49.674 "superblock": false, 00:10:49.674 
"num_base_bdevs": 4, 00:10:49.674 "num_base_bdevs_discovered": 2, 00:10:49.674 "num_base_bdevs_operational": 4, 00:10:49.674 "base_bdevs_list": [ 00:10:49.674 { 00:10:49.674 "name": "BaseBdev1", 00:10:49.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.674 "is_configured": false, 00:10:49.674 "data_offset": 0, 00:10:49.674 "data_size": 0 00:10:49.674 }, 00:10:49.674 { 00:10:49.674 "name": null, 00:10:49.674 "uuid": "9ebb4ef1-7405-4513-bc57-deda11dea430", 00:10:49.674 "is_configured": false, 00:10:49.674 "data_offset": 0, 00:10:49.674 "data_size": 65536 00:10:49.674 }, 00:10:49.674 { 00:10:49.674 "name": "BaseBdev3", 00:10:49.674 "uuid": "d09cae27-fe06-4270-b6ce-c6199c4ca9f3", 00:10:49.674 "is_configured": true, 00:10:49.674 "data_offset": 0, 00:10:49.674 "data_size": 65536 00:10:49.674 }, 00:10:49.674 { 00:10:49.674 "name": "BaseBdev4", 00:10:49.674 "uuid": "7189bb0a-0421-43d7-aef4-3b0e7434bc04", 00:10:49.674 "is_configured": true, 00:10:49.674 "data_offset": 0, 00:10:49.674 "data_size": 65536 00:10:49.674 } 00:10:49.674 ] 00:10:49.674 }' 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.674 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.933 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.933 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:49.933 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.933 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.933 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:50.201 15:37:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.201 [2024-11-25 15:37:48.666105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.201 BaseBdev1 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.201 15:37:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:50.201 [ 00:10:50.201 { 00:10:50.201 "name": "BaseBdev1", 00:10:50.201 "aliases": [ 00:10:50.201 "0ad5e3ae-9eed-41e7-acc6-64cb7ab55b7b" 00:10:50.201 ], 00:10:50.201 "product_name": "Malloc disk", 00:10:50.201 "block_size": 512, 00:10:50.201 "num_blocks": 65536, 00:10:50.201 "uuid": "0ad5e3ae-9eed-41e7-acc6-64cb7ab55b7b", 00:10:50.201 "assigned_rate_limits": { 00:10:50.202 "rw_ios_per_sec": 0, 00:10:50.202 "rw_mbytes_per_sec": 0, 00:10:50.202 "r_mbytes_per_sec": 0, 00:10:50.202 "w_mbytes_per_sec": 0 00:10:50.202 }, 00:10:50.202 "claimed": true, 00:10:50.202 "claim_type": "exclusive_write", 00:10:50.202 "zoned": false, 00:10:50.202 "supported_io_types": { 00:10:50.202 "read": true, 00:10:50.202 "write": true, 00:10:50.202 "unmap": true, 00:10:50.202 "flush": true, 00:10:50.202 "reset": true, 00:10:50.202 "nvme_admin": false, 00:10:50.202 "nvme_io": false, 00:10:50.202 "nvme_io_md": false, 00:10:50.202 "write_zeroes": true, 00:10:50.202 "zcopy": true, 00:10:50.202 "get_zone_info": false, 00:10:50.202 "zone_management": false, 00:10:50.202 "zone_append": false, 00:10:50.202 "compare": false, 00:10:50.202 "compare_and_write": false, 00:10:50.202 "abort": true, 00:10:50.202 "seek_hole": false, 00:10:50.202 "seek_data": false, 00:10:50.202 "copy": true, 00:10:50.202 "nvme_iov_md": false 00:10:50.202 }, 00:10:50.202 "memory_domains": [ 00:10:50.202 { 00:10:50.202 "dma_device_id": "system", 00:10:50.202 "dma_device_type": 1 00:10:50.202 }, 00:10:50.202 { 00:10:50.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.202 "dma_device_type": 2 00:10:50.202 } 00:10:50.202 ], 00:10:50.202 "driver_specific": {} 00:10:50.202 } 00:10:50.202 ] 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.202 "name": "Existed_Raid", 00:10:50.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.202 "strip_size_kb": 64, 00:10:50.202 "state": "configuring", 00:10:50.202 "raid_level": "raid0", 00:10:50.202 "superblock": false, 
00:10:50.202 "num_base_bdevs": 4, 00:10:50.202 "num_base_bdevs_discovered": 3, 00:10:50.202 "num_base_bdevs_operational": 4, 00:10:50.202 "base_bdevs_list": [ 00:10:50.202 { 00:10:50.202 "name": "BaseBdev1", 00:10:50.202 "uuid": "0ad5e3ae-9eed-41e7-acc6-64cb7ab55b7b", 00:10:50.202 "is_configured": true, 00:10:50.202 "data_offset": 0, 00:10:50.202 "data_size": 65536 00:10:50.202 }, 00:10:50.202 { 00:10:50.202 "name": null, 00:10:50.202 "uuid": "9ebb4ef1-7405-4513-bc57-deda11dea430", 00:10:50.202 "is_configured": false, 00:10:50.202 "data_offset": 0, 00:10:50.202 "data_size": 65536 00:10:50.202 }, 00:10:50.202 { 00:10:50.202 "name": "BaseBdev3", 00:10:50.202 "uuid": "d09cae27-fe06-4270-b6ce-c6199c4ca9f3", 00:10:50.202 "is_configured": true, 00:10:50.202 "data_offset": 0, 00:10:50.202 "data_size": 65536 00:10:50.202 }, 00:10:50.202 { 00:10:50.202 "name": "BaseBdev4", 00:10:50.202 "uuid": "7189bb0a-0421-43d7-aef4-3b0e7434bc04", 00:10:50.202 "is_configured": true, 00:10:50.202 "data_offset": 0, 00:10:50.202 "data_size": 65536 00:10:50.202 } 00:10:50.202 ] 00:10:50.202 }' 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.202 15:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.478 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.478 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:50.478 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.478 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:50.739 15:37:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.739 [2024-11-25 15:37:49.181283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.739 15:37:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.739 "name": "Existed_Raid", 00:10:50.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.739 "strip_size_kb": 64, 00:10:50.739 "state": "configuring", 00:10:50.739 "raid_level": "raid0", 00:10:50.739 "superblock": false, 00:10:50.739 "num_base_bdevs": 4, 00:10:50.739 "num_base_bdevs_discovered": 2, 00:10:50.739 "num_base_bdevs_operational": 4, 00:10:50.739 "base_bdevs_list": [ 00:10:50.739 { 00:10:50.739 "name": "BaseBdev1", 00:10:50.739 "uuid": "0ad5e3ae-9eed-41e7-acc6-64cb7ab55b7b", 00:10:50.739 "is_configured": true, 00:10:50.739 "data_offset": 0, 00:10:50.739 "data_size": 65536 00:10:50.739 }, 00:10:50.739 { 00:10:50.739 "name": null, 00:10:50.739 "uuid": "9ebb4ef1-7405-4513-bc57-deda11dea430", 00:10:50.739 "is_configured": false, 00:10:50.739 "data_offset": 0, 00:10:50.739 "data_size": 65536 00:10:50.739 }, 00:10:50.739 { 00:10:50.739 "name": null, 00:10:50.739 "uuid": "d09cae27-fe06-4270-b6ce-c6199c4ca9f3", 00:10:50.739 "is_configured": false, 00:10:50.739 "data_offset": 0, 00:10:50.739 "data_size": 65536 00:10:50.739 }, 00:10:50.739 { 00:10:50.739 "name": "BaseBdev4", 00:10:50.739 "uuid": "7189bb0a-0421-43d7-aef4-3b0e7434bc04", 00:10:50.739 "is_configured": true, 00:10:50.739 "data_offset": 0, 00:10:50.739 "data_size": 65536 00:10:50.739 } 00:10:50.739 ] 00:10:50.739 }' 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.739 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.997 15:37:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.998 [2024-11-25 15:37:49.640498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.998 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.257 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.257 "name": "Existed_Raid", 00:10:51.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.257 "strip_size_kb": 64, 00:10:51.257 "state": "configuring", 00:10:51.257 "raid_level": "raid0", 00:10:51.257 "superblock": false, 00:10:51.257 "num_base_bdevs": 4, 00:10:51.257 "num_base_bdevs_discovered": 3, 00:10:51.257 "num_base_bdevs_operational": 4, 00:10:51.257 "base_bdevs_list": [ 00:10:51.257 { 00:10:51.257 "name": "BaseBdev1", 00:10:51.257 "uuid": "0ad5e3ae-9eed-41e7-acc6-64cb7ab55b7b", 00:10:51.257 "is_configured": true, 00:10:51.257 "data_offset": 0, 00:10:51.257 "data_size": 65536 00:10:51.257 }, 00:10:51.257 { 00:10:51.257 "name": null, 00:10:51.257 "uuid": "9ebb4ef1-7405-4513-bc57-deda11dea430", 00:10:51.257 "is_configured": false, 00:10:51.257 "data_offset": 0, 00:10:51.257 "data_size": 65536 00:10:51.257 }, 00:10:51.257 { 00:10:51.257 "name": "BaseBdev3", 00:10:51.257 "uuid": "d09cae27-fe06-4270-b6ce-c6199c4ca9f3", 
00:10:51.257 "is_configured": true, 00:10:51.257 "data_offset": 0, 00:10:51.257 "data_size": 65536 00:10:51.257 }, 00:10:51.257 { 00:10:51.257 "name": "BaseBdev4", 00:10:51.257 "uuid": "7189bb0a-0421-43d7-aef4-3b0e7434bc04", 00:10:51.257 "is_configured": true, 00:10:51.257 "data_offset": 0, 00:10:51.257 "data_size": 65536 00:10:51.257 } 00:10:51.257 ] 00:10:51.257 }' 00:10:51.258 15:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.258 15:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.517 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.517 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.517 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.517 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.517 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.517 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:51.517 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:51.517 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.517 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.517 [2024-11-25 15:37:50.103733] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.776 15:37:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.776 "name": "Existed_Raid", 00:10:51.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.776 "strip_size_kb": 64, 00:10:51.776 "state": "configuring", 00:10:51.776 "raid_level": "raid0", 00:10:51.776 "superblock": false, 00:10:51.776 "num_base_bdevs": 4, 00:10:51.776 "num_base_bdevs_discovered": 2, 00:10:51.776 
"num_base_bdevs_operational": 4, 00:10:51.776 "base_bdevs_list": [ 00:10:51.776 { 00:10:51.776 "name": null, 00:10:51.776 "uuid": "0ad5e3ae-9eed-41e7-acc6-64cb7ab55b7b", 00:10:51.776 "is_configured": false, 00:10:51.776 "data_offset": 0, 00:10:51.776 "data_size": 65536 00:10:51.776 }, 00:10:51.776 { 00:10:51.776 "name": null, 00:10:51.776 "uuid": "9ebb4ef1-7405-4513-bc57-deda11dea430", 00:10:51.776 "is_configured": false, 00:10:51.776 "data_offset": 0, 00:10:51.776 "data_size": 65536 00:10:51.776 }, 00:10:51.776 { 00:10:51.776 "name": "BaseBdev3", 00:10:51.776 "uuid": "d09cae27-fe06-4270-b6ce-c6199c4ca9f3", 00:10:51.776 "is_configured": true, 00:10:51.776 "data_offset": 0, 00:10:51.776 "data_size": 65536 00:10:51.776 }, 00:10:51.776 { 00:10:51.776 "name": "BaseBdev4", 00:10:51.776 "uuid": "7189bb0a-0421-43d7-aef4-3b0e7434bc04", 00:10:51.776 "is_configured": true, 00:10:51.776 "data_offset": 0, 00:10:51.776 "data_size": 65536 00:10:51.776 } 00:10:51.776 ] 00:10:51.776 }' 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.776 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.036 [2024-11-25 15:37:50.678521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.036 
15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.036 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.295 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.295 "name": "Existed_Raid", 00:10:52.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.295 "strip_size_kb": 64, 00:10:52.295 "state": "configuring", 00:10:52.295 "raid_level": "raid0", 00:10:52.295 "superblock": false, 00:10:52.295 "num_base_bdevs": 4, 00:10:52.295 "num_base_bdevs_discovered": 3, 00:10:52.295 "num_base_bdevs_operational": 4, 00:10:52.295 "base_bdevs_list": [ 00:10:52.295 { 00:10:52.295 "name": null, 00:10:52.295 "uuid": "0ad5e3ae-9eed-41e7-acc6-64cb7ab55b7b", 00:10:52.295 "is_configured": false, 00:10:52.295 "data_offset": 0, 00:10:52.295 "data_size": 65536 00:10:52.295 }, 00:10:52.295 { 00:10:52.295 "name": "BaseBdev2", 00:10:52.295 "uuid": "9ebb4ef1-7405-4513-bc57-deda11dea430", 00:10:52.295 "is_configured": true, 00:10:52.295 "data_offset": 0, 00:10:52.295 "data_size": 65536 00:10:52.295 }, 00:10:52.295 { 00:10:52.295 "name": "BaseBdev3", 00:10:52.295 "uuid": "d09cae27-fe06-4270-b6ce-c6199c4ca9f3", 00:10:52.295 "is_configured": true, 00:10:52.295 "data_offset": 0, 00:10:52.295 "data_size": 65536 00:10:52.295 }, 00:10:52.295 { 00:10:52.295 "name": "BaseBdev4", 00:10:52.295 "uuid": "7189bb0a-0421-43d7-aef4-3b0e7434bc04", 00:10:52.295 "is_configured": true, 00:10:52.295 "data_offset": 0, 00:10:52.295 "data_size": 65536 00:10:52.295 } 00:10:52.295 ] 00:10:52.295 }' 00:10:52.295 15:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.295 15:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.554 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.554 15:37:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.554 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:52.554 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.555 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.555 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:52.555 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.555 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.555 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.555 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:52.555 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.555 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0ad5e3ae-9eed-41e7-acc6-64cb7ab55b7b 00:10:52.555 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.555 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.814 [2024-11-25 15:37:51.261672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:52.814 [2024-11-25 15:37:51.261781] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:52.814 [2024-11-25 15:37:51.261793] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:52.814 [2024-11-25 15:37:51.262110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:52.814 [2024-11-25 15:37:51.262267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:52.814 [2024-11-25 15:37:51.262280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:52.814 [2024-11-25 15:37:51.262516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.814 NewBaseBdev 00:10:52.814 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.814 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:52.814 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:52.814 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.814 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:52.814 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.814 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.814 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.814 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.814 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.814 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.814 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:52.814 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.814 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:52.814 [ 00:10:52.814 { 00:10:52.814 "name": "NewBaseBdev", 00:10:52.814 "aliases": [ 00:10:52.814 "0ad5e3ae-9eed-41e7-acc6-64cb7ab55b7b" 00:10:52.814 ], 00:10:52.814 "product_name": "Malloc disk", 00:10:52.814 "block_size": 512, 00:10:52.814 "num_blocks": 65536, 00:10:52.814 "uuid": "0ad5e3ae-9eed-41e7-acc6-64cb7ab55b7b", 00:10:52.814 "assigned_rate_limits": { 00:10:52.814 "rw_ios_per_sec": 0, 00:10:52.814 "rw_mbytes_per_sec": 0, 00:10:52.814 "r_mbytes_per_sec": 0, 00:10:52.814 "w_mbytes_per_sec": 0 00:10:52.814 }, 00:10:52.814 "claimed": true, 00:10:52.814 "claim_type": "exclusive_write", 00:10:52.814 "zoned": false, 00:10:52.814 "supported_io_types": { 00:10:52.814 "read": true, 00:10:52.814 "write": true, 00:10:52.814 "unmap": true, 00:10:52.814 "flush": true, 00:10:52.814 "reset": true, 00:10:52.814 "nvme_admin": false, 00:10:52.814 "nvme_io": false, 00:10:52.814 "nvme_io_md": false, 00:10:52.814 "write_zeroes": true, 00:10:52.814 "zcopy": true, 00:10:52.814 "get_zone_info": false, 00:10:52.814 "zone_management": false, 00:10:52.814 "zone_append": false, 00:10:52.814 "compare": false, 00:10:52.814 "compare_and_write": false, 00:10:52.814 "abort": true, 00:10:52.814 "seek_hole": false, 00:10:52.814 "seek_data": false, 00:10:52.814 "copy": true, 00:10:52.814 "nvme_iov_md": false 00:10:52.814 }, 00:10:52.814 "memory_domains": [ 00:10:52.814 { 00:10:52.814 "dma_device_id": "system", 00:10:52.814 "dma_device_type": 1 00:10:52.814 }, 00:10:52.814 { 00:10:52.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.814 "dma_device_type": 2 00:10:52.814 } 00:10:52.815 ], 00:10:52.815 "driver_specific": {} 00:10:52.815 } 00:10:52.815 ] 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.815 "name": "Existed_Raid", 00:10:52.815 "uuid": "bd728c5f-e859-4ab0-beb4-59d99dde17e7", 00:10:52.815 "strip_size_kb": 64, 00:10:52.815 "state": "online", 00:10:52.815 "raid_level": "raid0", 00:10:52.815 "superblock": false, 00:10:52.815 "num_base_bdevs": 4, 00:10:52.815 
"num_base_bdevs_discovered": 4, 00:10:52.815 "num_base_bdevs_operational": 4, 00:10:52.815 "base_bdevs_list": [ 00:10:52.815 { 00:10:52.815 "name": "NewBaseBdev", 00:10:52.815 "uuid": "0ad5e3ae-9eed-41e7-acc6-64cb7ab55b7b", 00:10:52.815 "is_configured": true, 00:10:52.815 "data_offset": 0, 00:10:52.815 "data_size": 65536 00:10:52.815 }, 00:10:52.815 { 00:10:52.815 "name": "BaseBdev2", 00:10:52.815 "uuid": "9ebb4ef1-7405-4513-bc57-deda11dea430", 00:10:52.815 "is_configured": true, 00:10:52.815 "data_offset": 0, 00:10:52.815 "data_size": 65536 00:10:52.815 }, 00:10:52.815 { 00:10:52.815 "name": "BaseBdev3", 00:10:52.815 "uuid": "d09cae27-fe06-4270-b6ce-c6199c4ca9f3", 00:10:52.815 "is_configured": true, 00:10:52.815 "data_offset": 0, 00:10:52.815 "data_size": 65536 00:10:52.815 }, 00:10:52.815 { 00:10:52.815 "name": "BaseBdev4", 00:10:52.815 "uuid": "7189bb0a-0421-43d7-aef4-3b0e7434bc04", 00:10:52.815 "is_configured": true, 00:10:52.815 "data_offset": 0, 00:10:52.815 "data_size": 65536 00:10:52.815 } 00:10:52.815 ] 00:10:52.815 }' 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.815 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.384 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:53.384 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:53.384 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.384 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:53.384 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:53.384 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.384 15:37:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:53.384 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.384 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.384 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.384 [2024-11-25 15:37:51.781216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.384 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.384 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:53.384 "name": "Existed_Raid", 00:10:53.384 "aliases": [ 00:10:53.384 "bd728c5f-e859-4ab0-beb4-59d99dde17e7" 00:10:53.384 ], 00:10:53.384 "product_name": "Raid Volume", 00:10:53.384 "block_size": 512, 00:10:53.384 "num_blocks": 262144, 00:10:53.384 "uuid": "bd728c5f-e859-4ab0-beb4-59d99dde17e7", 00:10:53.384 "assigned_rate_limits": { 00:10:53.384 "rw_ios_per_sec": 0, 00:10:53.384 "rw_mbytes_per_sec": 0, 00:10:53.384 "r_mbytes_per_sec": 0, 00:10:53.384 "w_mbytes_per_sec": 0 00:10:53.384 }, 00:10:53.384 "claimed": false, 00:10:53.384 "zoned": false, 00:10:53.384 "supported_io_types": { 00:10:53.384 "read": true, 00:10:53.384 "write": true, 00:10:53.384 "unmap": true, 00:10:53.384 "flush": true, 00:10:53.384 "reset": true, 00:10:53.384 "nvme_admin": false, 00:10:53.384 "nvme_io": false, 00:10:53.384 "nvme_io_md": false, 00:10:53.384 "write_zeroes": true, 00:10:53.384 "zcopy": false, 00:10:53.384 "get_zone_info": false, 00:10:53.384 "zone_management": false, 00:10:53.384 "zone_append": false, 00:10:53.384 "compare": false, 00:10:53.384 "compare_and_write": false, 00:10:53.384 "abort": false, 00:10:53.384 "seek_hole": false, 00:10:53.384 "seek_data": false, 00:10:53.384 "copy": false, 00:10:53.384 "nvme_iov_md": false 00:10:53.384 }, 00:10:53.384 "memory_domains": [ 
00:10:53.384 { 00:10:53.384 "dma_device_id": "system", 00:10:53.384 "dma_device_type": 1 00:10:53.384 }, 00:10:53.384 { 00:10:53.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.384 "dma_device_type": 2 00:10:53.384 }, 00:10:53.384 { 00:10:53.384 "dma_device_id": "system", 00:10:53.384 "dma_device_type": 1 00:10:53.384 }, 00:10:53.384 { 00:10:53.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.384 "dma_device_type": 2 00:10:53.384 }, 00:10:53.384 { 00:10:53.384 "dma_device_id": "system", 00:10:53.384 "dma_device_type": 1 00:10:53.384 }, 00:10:53.384 { 00:10:53.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.384 "dma_device_type": 2 00:10:53.384 }, 00:10:53.384 { 00:10:53.385 "dma_device_id": "system", 00:10:53.385 "dma_device_type": 1 00:10:53.385 }, 00:10:53.385 { 00:10:53.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.385 "dma_device_type": 2 00:10:53.385 } 00:10:53.385 ], 00:10:53.385 "driver_specific": { 00:10:53.385 "raid": { 00:10:53.385 "uuid": "bd728c5f-e859-4ab0-beb4-59d99dde17e7", 00:10:53.385 "strip_size_kb": 64, 00:10:53.385 "state": "online", 00:10:53.385 "raid_level": "raid0", 00:10:53.385 "superblock": false, 00:10:53.385 "num_base_bdevs": 4, 00:10:53.385 "num_base_bdevs_discovered": 4, 00:10:53.385 "num_base_bdevs_operational": 4, 00:10:53.385 "base_bdevs_list": [ 00:10:53.385 { 00:10:53.385 "name": "NewBaseBdev", 00:10:53.385 "uuid": "0ad5e3ae-9eed-41e7-acc6-64cb7ab55b7b", 00:10:53.385 "is_configured": true, 00:10:53.385 "data_offset": 0, 00:10:53.385 "data_size": 65536 00:10:53.385 }, 00:10:53.385 { 00:10:53.385 "name": "BaseBdev2", 00:10:53.385 "uuid": "9ebb4ef1-7405-4513-bc57-deda11dea430", 00:10:53.385 "is_configured": true, 00:10:53.385 "data_offset": 0, 00:10:53.385 "data_size": 65536 00:10:53.385 }, 00:10:53.385 { 00:10:53.385 "name": "BaseBdev3", 00:10:53.385 "uuid": "d09cae27-fe06-4270-b6ce-c6199c4ca9f3", 00:10:53.385 "is_configured": true, 00:10:53.385 "data_offset": 0, 00:10:53.385 "data_size": 65536 
00:10:53.385 }, 00:10:53.385 { 00:10:53.385 "name": "BaseBdev4", 00:10:53.385 "uuid": "7189bb0a-0421-43d7-aef4-3b0e7434bc04", 00:10:53.385 "is_configured": true, 00:10:53.385 "data_offset": 0, 00:10:53.385 "data_size": 65536 00:10:53.385 } 00:10:53.385 ] 00:10:53.385 } 00:10:53.385 } 00:10:53.385 }' 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:53.385 BaseBdev2 00:10:53.385 BaseBdev3 00:10:53.385 BaseBdev4' 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.385 
15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.385 15:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.385 15:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.385 15:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:53.385 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.385 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.385 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.385 15:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.385 15:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.385 15:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.385 15:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:53.385 15:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:53.385 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.385 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.645 [2024-11-25 15:37:52.104275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.645 [2024-11-25 15:37:52.104352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.645 [2024-11-25 15:37:52.104451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.645 [2024-11-25 15:37:52.104535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.645 [2024-11-25 15:37:52.104548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69133 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69133 ']' 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69133 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69133 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.645 killing process with pid 69133 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69133' 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69133 00:10:53.645 [2024-11-25 15:37:52.152697] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.645 15:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69133 00:10:53.904 [2024-11-25 15:37:52.547700] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:55.283 00:10:55.283 real 0m11.421s 00:10:55.283 user 0m18.213s 00:10:55.283 sys 0m1.971s 00:10:55.283 ************************************ 00:10:55.283 END TEST raid_state_function_test 00:10:55.283 ************************************ 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.283 15:37:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:55.283 15:37:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:55.283 15:37:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.283 15:37:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.283 ************************************ 00:10:55.283 START TEST raid_state_function_test_sb 00:10:55.283 ************************************ 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:55.283 
15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69805 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69805' 00:10:55.283 Process raid pid: 69805 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69805 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69805 ']' 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.283 15:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.283 [2024-11-25 15:37:53.819368] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:10:55.283 [2024-11-25 15:37:53.819561] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.541 [2024-11-25 15:37:53.977236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.541 [2024-11-25 15:37:54.090347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.800 [2024-11-25 15:37:54.293835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.800 [2024-11-25 15:37:54.293957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.059 [2024-11-25 15:37:54.639258] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:56.059 [2024-11-25 15:37:54.639363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:56.059 [2024-11-25 15:37:54.639398] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:56.059 [2024-11-25 15:37:54.639425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:56.059 [2024-11-25 15:37:54.639447] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:56.059 [2024-11-25 15:37:54.639471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:56.059 [2024-11-25 15:37:54.639492] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:56.059 [2024-11-25 15:37:54.639561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.059 15:37:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.059 "name": "Existed_Raid", 00:10:56.059 "uuid": "438ebf8a-b531-4554-bfb0-d9fdee197396", 00:10:56.059 "strip_size_kb": 64, 00:10:56.059 "state": "configuring", 00:10:56.059 "raid_level": "raid0", 00:10:56.059 "superblock": true, 00:10:56.059 "num_base_bdevs": 4, 00:10:56.059 "num_base_bdevs_discovered": 0, 00:10:56.059 "num_base_bdevs_operational": 4, 00:10:56.059 "base_bdevs_list": [ 00:10:56.059 { 00:10:56.059 "name": "BaseBdev1", 00:10:56.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.059 "is_configured": false, 00:10:56.059 "data_offset": 0, 00:10:56.059 "data_size": 0 00:10:56.059 }, 00:10:56.059 { 00:10:56.059 "name": "BaseBdev2", 00:10:56.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.059 "is_configured": false, 00:10:56.059 "data_offset": 0, 00:10:56.059 "data_size": 0 00:10:56.059 }, 00:10:56.059 { 00:10:56.059 "name": "BaseBdev3", 00:10:56.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.059 "is_configured": false, 00:10:56.059 "data_offset": 0, 00:10:56.059 "data_size": 0 00:10:56.059 }, 00:10:56.059 { 00:10:56.059 "name": "BaseBdev4", 00:10:56.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.059 "is_configured": false, 00:10:56.059 "data_offset": 0, 00:10:56.059 "data_size": 0 00:10:56.059 } 00:10:56.059 ] 00:10:56.059 }' 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.059 15:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.628 [2024-11-25 15:37:55.090412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:56.628 [2024-11-25 15:37:55.090450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.628 [2024-11-25 15:37:55.098399] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:56.628 [2024-11-25 15:37:55.098440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:56.628 [2024-11-25 15:37:55.098450] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:56.628 [2024-11-25 15:37:55.098459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:56.628 [2024-11-25 15:37:55.098465] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:56.628 [2024-11-25 15:37:55.098475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:56.628 [2024-11-25 15:37:55.098481] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:56.628 [2024-11-25 15:37:55.098489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.628 [2024-11-25 15:37:55.143168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.628 BaseBdev1 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.628 [ 00:10:56.628 { 00:10:56.628 "name": "BaseBdev1", 00:10:56.628 "aliases": [ 00:10:56.628 "b72264e9-7080-48b2-80f0-8da210dc0dbd" 00:10:56.628 ], 00:10:56.628 "product_name": "Malloc disk", 00:10:56.628 "block_size": 512, 00:10:56.628 "num_blocks": 65536, 00:10:56.628 "uuid": "b72264e9-7080-48b2-80f0-8da210dc0dbd", 00:10:56.628 "assigned_rate_limits": { 00:10:56.628 "rw_ios_per_sec": 0, 00:10:56.628 "rw_mbytes_per_sec": 0, 00:10:56.628 "r_mbytes_per_sec": 0, 00:10:56.628 "w_mbytes_per_sec": 0 00:10:56.628 }, 00:10:56.628 "claimed": true, 00:10:56.628 "claim_type": "exclusive_write", 00:10:56.628 "zoned": false, 00:10:56.628 "supported_io_types": { 00:10:56.628 "read": true, 00:10:56.628 "write": true, 00:10:56.628 "unmap": true, 00:10:56.628 "flush": true, 00:10:56.628 "reset": true, 00:10:56.628 "nvme_admin": false, 00:10:56.628 "nvme_io": false, 00:10:56.628 "nvme_io_md": false, 00:10:56.628 "write_zeroes": true, 00:10:56.628 "zcopy": true, 00:10:56.628 "get_zone_info": false, 00:10:56.628 "zone_management": false, 00:10:56.628 "zone_append": false, 00:10:56.628 "compare": false, 00:10:56.628 "compare_and_write": false, 00:10:56.628 "abort": true, 00:10:56.628 "seek_hole": false, 00:10:56.628 "seek_data": false, 00:10:56.628 "copy": true, 00:10:56.628 "nvme_iov_md": false 00:10:56.628 }, 00:10:56.628 "memory_domains": [ 00:10:56.628 { 00:10:56.628 "dma_device_id": "system", 00:10:56.628 "dma_device_type": 1 00:10:56.628 }, 00:10:56.628 { 00:10:56.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.628 "dma_device_type": 2 00:10:56.628 } 00:10:56.628 ], 00:10:56.628 "driver_specific": {} 
00:10:56.628 } 00:10:56.628 ] 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.628 "name": "Existed_Raid", 00:10:56.628 "uuid": "43bc63c6-8264-43a4-9506-73b567552b07", 00:10:56.628 "strip_size_kb": 64, 00:10:56.628 "state": "configuring", 00:10:56.628 "raid_level": "raid0", 00:10:56.628 "superblock": true, 00:10:56.628 "num_base_bdevs": 4, 00:10:56.628 "num_base_bdevs_discovered": 1, 00:10:56.628 "num_base_bdevs_operational": 4, 00:10:56.628 "base_bdevs_list": [ 00:10:56.628 { 00:10:56.628 "name": "BaseBdev1", 00:10:56.628 "uuid": "b72264e9-7080-48b2-80f0-8da210dc0dbd", 00:10:56.628 "is_configured": true, 00:10:56.628 "data_offset": 2048, 00:10:56.628 "data_size": 63488 00:10:56.628 }, 00:10:56.628 { 00:10:56.628 "name": "BaseBdev2", 00:10:56.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.628 "is_configured": false, 00:10:56.628 "data_offset": 0, 00:10:56.628 "data_size": 0 00:10:56.628 }, 00:10:56.628 { 00:10:56.628 "name": "BaseBdev3", 00:10:56.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.628 "is_configured": false, 00:10:56.628 "data_offset": 0, 00:10:56.628 "data_size": 0 00:10:56.628 }, 00:10:56.628 { 00:10:56.628 "name": "BaseBdev4", 00:10:56.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.628 "is_configured": false, 00:10:56.628 "data_offset": 0, 00:10:56.628 "data_size": 0 00:10:56.628 } 00:10:56.628 ] 00:10:56.628 }' 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.628 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.197 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:57.197 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.197 15:37:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:57.197 [2024-11-25 15:37:55.598470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.197 [2024-11-25 15:37:55.598525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:57.197 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.197 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.197 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.197 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.197 [2024-11-25 15:37:55.606520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.197 [2024-11-25 15:37:55.608346] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.197 [2024-11-25 15:37:55.608386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.197 [2024-11-25 15:37:55.608396] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:57.197 [2024-11-25 15:37:55.608406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:57.197 [2024-11-25 15:37:55.608413] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:57.197 [2024-11-25 15:37:55.608421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:57.197 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:57.198 15:37:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.198 "name": 
"Existed_Raid", 00:10:57.198 "uuid": "8538ae6b-2b1f-4814-929e-6859deb2086c", 00:10:57.198 "strip_size_kb": 64, 00:10:57.198 "state": "configuring", 00:10:57.198 "raid_level": "raid0", 00:10:57.198 "superblock": true, 00:10:57.198 "num_base_bdevs": 4, 00:10:57.198 "num_base_bdevs_discovered": 1, 00:10:57.198 "num_base_bdevs_operational": 4, 00:10:57.198 "base_bdevs_list": [ 00:10:57.198 { 00:10:57.198 "name": "BaseBdev1", 00:10:57.198 "uuid": "b72264e9-7080-48b2-80f0-8da210dc0dbd", 00:10:57.198 "is_configured": true, 00:10:57.198 "data_offset": 2048, 00:10:57.198 "data_size": 63488 00:10:57.198 }, 00:10:57.198 { 00:10:57.198 "name": "BaseBdev2", 00:10:57.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.198 "is_configured": false, 00:10:57.198 "data_offset": 0, 00:10:57.198 "data_size": 0 00:10:57.198 }, 00:10:57.198 { 00:10:57.198 "name": "BaseBdev3", 00:10:57.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.198 "is_configured": false, 00:10:57.198 "data_offset": 0, 00:10:57.198 "data_size": 0 00:10:57.198 }, 00:10:57.198 { 00:10:57.198 "name": "BaseBdev4", 00:10:57.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.198 "is_configured": false, 00:10:57.198 "data_offset": 0, 00:10:57.198 "data_size": 0 00:10:57.198 } 00:10:57.198 ] 00:10:57.198 }' 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.198 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.457 15:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:57.457 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.457 15:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.457 [2024-11-25 15:37:56.040849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:57.457 BaseBdev2 00:10:57.457 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.457 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:57.457 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:57.457 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.457 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:57.457 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.457 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.457 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.457 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.457 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.457 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.457 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:57.457 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.457 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.457 [ 00:10:57.457 { 00:10:57.457 "name": "BaseBdev2", 00:10:57.457 "aliases": [ 00:10:57.457 "890a2965-101c-4ef3-ace6-e28a092c57ab" 00:10:57.457 ], 00:10:57.457 "product_name": "Malloc disk", 00:10:57.457 "block_size": 512, 00:10:57.457 "num_blocks": 65536, 00:10:57.457 "uuid": "890a2965-101c-4ef3-ace6-e28a092c57ab", 00:10:57.457 
"assigned_rate_limits": { 00:10:57.457 "rw_ios_per_sec": 0, 00:10:57.457 "rw_mbytes_per_sec": 0, 00:10:57.457 "r_mbytes_per_sec": 0, 00:10:57.457 "w_mbytes_per_sec": 0 00:10:57.457 }, 00:10:57.457 "claimed": true, 00:10:57.457 "claim_type": "exclusive_write", 00:10:57.457 "zoned": false, 00:10:57.457 "supported_io_types": { 00:10:57.457 "read": true, 00:10:57.457 "write": true, 00:10:57.457 "unmap": true, 00:10:57.457 "flush": true, 00:10:57.457 "reset": true, 00:10:57.457 "nvme_admin": false, 00:10:57.457 "nvme_io": false, 00:10:57.457 "nvme_io_md": false, 00:10:57.457 "write_zeroes": true, 00:10:57.457 "zcopy": true, 00:10:57.457 "get_zone_info": false, 00:10:57.457 "zone_management": false, 00:10:57.457 "zone_append": false, 00:10:57.457 "compare": false, 00:10:57.457 "compare_and_write": false, 00:10:57.457 "abort": true, 00:10:57.457 "seek_hole": false, 00:10:57.457 "seek_data": false, 00:10:57.457 "copy": true, 00:10:57.457 "nvme_iov_md": false 00:10:57.457 }, 00:10:57.457 "memory_domains": [ 00:10:57.457 { 00:10:57.457 "dma_device_id": "system", 00:10:57.457 "dma_device_type": 1 00:10:57.457 }, 00:10:57.457 { 00:10:57.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.457 "dma_device_type": 2 00:10:57.457 } 00:10:57.457 ], 00:10:57.457 "driver_specific": {} 00:10:57.457 } 00:10:57.458 ] 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.458 "name": "Existed_Raid", 00:10:57.458 "uuid": "8538ae6b-2b1f-4814-929e-6859deb2086c", 00:10:57.458 "strip_size_kb": 64, 00:10:57.458 "state": "configuring", 00:10:57.458 "raid_level": "raid0", 00:10:57.458 "superblock": true, 00:10:57.458 "num_base_bdevs": 4, 00:10:57.458 "num_base_bdevs_discovered": 2, 00:10:57.458 "num_base_bdevs_operational": 4, 
00:10:57.458 "base_bdevs_list": [ 00:10:57.458 { 00:10:57.458 "name": "BaseBdev1", 00:10:57.458 "uuid": "b72264e9-7080-48b2-80f0-8da210dc0dbd", 00:10:57.458 "is_configured": true, 00:10:57.458 "data_offset": 2048, 00:10:57.458 "data_size": 63488 00:10:57.458 }, 00:10:57.458 { 00:10:57.458 "name": "BaseBdev2", 00:10:57.458 "uuid": "890a2965-101c-4ef3-ace6-e28a092c57ab", 00:10:57.458 "is_configured": true, 00:10:57.458 "data_offset": 2048, 00:10:57.458 "data_size": 63488 00:10:57.458 }, 00:10:57.458 { 00:10:57.458 "name": "BaseBdev3", 00:10:57.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.458 "is_configured": false, 00:10:57.458 "data_offset": 0, 00:10:57.458 "data_size": 0 00:10:57.458 }, 00:10:57.458 { 00:10:57.458 "name": "BaseBdev4", 00:10:57.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.458 "is_configured": false, 00:10:57.458 "data_offset": 0, 00:10:57.458 "data_size": 0 00:10:57.458 } 00:10:57.458 ] 00:10:57.458 }' 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.458 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.026 [2024-11-25 15:37:56.569879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.026 BaseBdev3 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.026 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.026 [ 00:10:58.026 { 00:10:58.026 "name": "BaseBdev3", 00:10:58.026 "aliases": [ 00:10:58.026 "1baa56c9-9914-452d-be3e-62a96c2fa439" 00:10:58.026 ], 00:10:58.026 "product_name": "Malloc disk", 00:10:58.026 "block_size": 512, 00:10:58.026 "num_blocks": 65536, 00:10:58.026 "uuid": "1baa56c9-9914-452d-be3e-62a96c2fa439", 00:10:58.026 "assigned_rate_limits": { 00:10:58.026 "rw_ios_per_sec": 0, 00:10:58.026 "rw_mbytes_per_sec": 0, 00:10:58.026 "r_mbytes_per_sec": 0, 00:10:58.026 "w_mbytes_per_sec": 0 00:10:58.026 }, 00:10:58.026 "claimed": true, 00:10:58.026 "claim_type": "exclusive_write", 00:10:58.026 "zoned": false, 00:10:58.026 "supported_io_types": { 00:10:58.026 "read": true, 00:10:58.026 
"write": true, 00:10:58.026 "unmap": true, 00:10:58.026 "flush": true, 00:10:58.026 "reset": true, 00:10:58.026 "nvme_admin": false, 00:10:58.026 "nvme_io": false, 00:10:58.026 "nvme_io_md": false, 00:10:58.026 "write_zeroes": true, 00:10:58.026 "zcopy": true, 00:10:58.026 "get_zone_info": false, 00:10:58.026 "zone_management": false, 00:10:58.026 "zone_append": false, 00:10:58.026 "compare": false, 00:10:58.026 "compare_and_write": false, 00:10:58.026 "abort": true, 00:10:58.026 "seek_hole": false, 00:10:58.026 "seek_data": false, 00:10:58.026 "copy": true, 00:10:58.026 "nvme_iov_md": false 00:10:58.026 }, 00:10:58.026 "memory_domains": [ 00:10:58.026 { 00:10:58.026 "dma_device_id": "system", 00:10:58.026 "dma_device_type": 1 00:10:58.026 }, 00:10:58.026 { 00:10:58.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.026 "dma_device_type": 2 00:10:58.026 } 00:10:58.027 ], 00:10:58.027 "driver_specific": {} 00:10:58.027 } 00:10:58.027 ] 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.027 "name": "Existed_Raid", 00:10:58.027 "uuid": "8538ae6b-2b1f-4814-929e-6859deb2086c", 00:10:58.027 "strip_size_kb": 64, 00:10:58.027 "state": "configuring", 00:10:58.027 "raid_level": "raid0", 00:10:58.027 "superblock": true, 00:10:58.027 "num_base_bdevs": 4, 00:10:58.027 "num_base_bdevs_discovered": 3, 00:10:58.027 "num_base_bdevs_operational": 4, 00:10:58.027 "base_bdevs_list": [ 00:10:58.027 { 00:10:58.027 "name": "BaseBdev1", 00:10:58.027 "uuid": "b72264e9-7080-48b2-80f0-8da210dc0dbd", 00:10:58.027 "is_configured": true, 00:10:58.027 "data_offset": 2048, 00:10:58.027 "data_size": 63488 00:10:58.027 }, 00:10:58.027 { 00:10:58.027 "name": "BaseBdev2", 00:10:58.027 "uuid": 
"890a2965-101c-4ef3-ace6-e28a092c57ab", 00:10:58.027 "is_configured": true, 00:10:58.027 "data_offset": 2048, 00:10:58.027 "data_size": 63488 00:10:58.027 }, 00:10:58.027 { 00:10:58.027 "name": "BaseBdev3", 00:10:58.027 "uuid": "1baa56c9-9914-452d-be3e-62a96c2fa439", 00:10:58.027 "is_configured": true, 00:10:58.027 "data_offset": 2048, 00:10:58.027 "data_size": 63488 00:10:58.027 }, 00:10:58.027 { 00:10:58.027 "name": "BaseBdev4", 00:10:58.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.027 "is_configured": false, 00:10:58.027 "data_offset": 0, 00:10:58.027 "data_size": 0 00:10:58.027 } 00:10:58.027 ] 00:10:58.027 }' 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.027 15:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.595 [2024-11-25 15:37:57.064819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:58.595 [2024-11-25 15:37:57.065112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:58.595 [2024-11-25 15:37:57.065131] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:58.595 [2024-11-25 15:37:57.065397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:58.595 [2024-11-25 15:37:57.065559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:58.595 [2024-11-25 15:37:57.065572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:10:58.595 BaseBdev4 00:10:58.595 [2024-11-25 15:37:57.065729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.595 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.595 [ 00:10:58.595 { 00:10:58.595 "name": "BaseBdev4", 00:10:58.595 "aliases": [ 00:10:58.595 "4445112a-9d0b-4b98-9cde-0e4bf531b574" 00:10:58.595 ], 00:10:58.595 "product_name": "Malloc disk", 00:10:58.595 "block_size": 512, 00:10:58.595 
"num_blocks": 65536, 00:10:58.595 "uuid": "4445112a-9d0b-4b98-9cde-0e4bf531b574", 00:10:58.595 "assigned_rate_limits": { 00:10:58.595 "rw_ios_per_sec": 0, 00:10:58.595 "rw_mbytes_per_sec": 0, 00:10:58.595 "r_mbytes_per_sec": 0, 00:10:58.595 "w_mbytes_per_sec": 0 00:10:58.595 }, 00:10:58.595 "claimed": true, 00:10:58.595 "claim_type": "exclusive_write", 00:10:58.595 "zoned": false, 00:10:58.595 "supported_io_types": { 00:10:58.595 "read": true, 00:10:58.595 "write": true, 00:10:58.595 "unmap": true, 00:10:58.595 "flush": true, 00:10:58.595 "reset": true, 00:10:58.595 "nvme_admin": false, 00:10:58.595 "nvme_io": false, 00:10:58.595 "nvme_io_md": false, 00:10:58.595 "write_zeroes": true, 00:10:58.595 "zcopy": true, 00:10:58.595 "get_zone_info": false, 00:10:58.595 "zone_management": false, 00:10:58.595 "zone_append": false, 00:10:58.595 "compare": false, 00:10:58.595 "compare_and_write": false, 00:10:58.595 "abort": true, 00:10:58.595 "seek_hole": false, 00:10:58.595 "seek_data": false, 00:10:58.595 "copy": true, 00:10:58.596 "nvme_iov_md": false 00:10:58.596 }, 00:10:58.596 "memory_domains": [ 00:10:58.596 { 00:10:58.596 "dma_device_id": "system", 00:10:58.596 "dma_device_type": 1 00:10:58.596 }, 00:10:58.596 { 00:10:58.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.596 "dma_device_type": 2 00:10:58.596 } 00:10:58.596 ], 00:10:58.596 "driver_specific": {} 00:10:58.596 } 00:10:58.596 ] 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.596 "name": "Existed_Raid", 00:10:58.596 "uuid": "8538ae6b-2b1f-4814-929e-6859deb2086c", 00:10:58.596 "strip_size_kb": 64, 00:10:58.596 "state": "online", 00:10:58.596 "raid_level": "raid0", 00:10:58.596 "superblock": true, 00:10:58.596 "num_base_bdevs": 4, 
00:10:58.596 "num_base_bdevs_discovered": 4, 00:10:58.596 "num_base_bdevs_operational": 4, 00:10:58.596 "base_bdevs_list": [ 00:10:58.596 { 00:10:58.596 "name": "BaseBdev1", 00:10:58.596 "uuid": "b72264e9-7080-48b2-80f0-8da210dc0dbd", 00:10:58.596 "is_configured": true, 00:10:58.596 "data_offset": 2048, 00:10:58.596 "data_size": 63488 00:10:58.596 }, 00:10:58.596 { 00:10:58.596 "name": "BaseBdev2", 00:10:58.596 "uuid": "890a2965-101c-4ef3-ace6-e28a092c57ab", 00:10:58.596 "is_configured": true, 00:10:58.596 "data_offset": 2048, 00:10:58.596 "data_size": 63488 00:10:58.596 }, 00:10:58.596 { 00:10:58.596 "name": "BaseBdev3", 00:10:58.596 "uuid": "1baa56c9-9914-452d-be3e-62a96c2fa439", 00:10:58.596 "is_configured": true, 00:10:58.596 "data_offset": 2048, 00:10:58.596 "data_size": 63488 00:10:58.596 }, 00:10:58.596 { 00:10:58.596 "name": "BaseBdev4", 00:10:58.596 "uuid": "4445112a-9d0b-4b98-9cde-0e4bf531b574", 00:10:58.596 "is_configured": true, 00:10:58.596 "data_offset": 2048, 00:10:58.596 "data_size": 63488 00:10:58.596 } 00:10:58.596 ] 00:10:58.596 }' 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.596 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.855 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:58.855 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:58.855 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:58.855 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:58.855 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:58.855 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:58.855 
15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:58.855 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:58.855 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.855 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.855 [2024-11-25 15:37:57.516445] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.113 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.113 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.113 "name": "Existed_Raid", 00:10:59.113 "aliases": [ 00:10:59.113 "8538ae6b-2b1f-4814-929e-6859deb2086c" 00:10:59.113 ], 00:10:59.113 "product_name": "Raid Volume", 00:10:59.113 "block_size": 512, 00:10:59.113 "num_blocks": 253952, 00:10:59.113 "uuid": "8538ae6b-2b1f-4814-929e-6859deb2086c", 00:10:59.113 "assigned_rate_limits": { 00:10:59.113 "rw_ios_per_sec": 0, 00:10:59.113 "rw_mbytes_per_sec": 0, 00:10:59.114 "r_mbytes_per_sec": 0, 00:10:59.114 "w_mbytes_per_sec": 0 00:10:59.114 }, 00:10:59.114 "claimed": false, 00:10:59.114 "zoned": false, 00:10:59.114 "supported_io_types": { 00:10:59.114 "read": true, 00:10:59.114 "write": true, 00:10:59.114 "unmap": true, 00:10:59.114 "flush": true, 00:10:59.114 "reset": true, 00:10:59.114 "nvme_admin": false, 00:10:59.114 "nvme_io": false, 00:10:59.114 "nvme_io_md": false, 00:10:59.114 "write_zeroes": true, 00:10:59.114 "zcopy": false, 00:10:59.114 "get_zone_info": false, 00:10:59.114 "zone_management": false, 00:10:59.114 "zone_append": false, 00:10:59.114 "compare": false, 00:10:59.114 "compare_and_write": false, 00:10:59.114 "abort": false, 00:10:59.114 "seek_hole": false, 00:10:59.114 "seek_data": false, 00:10:59.114 "copy": false, 00:10:59.114 
"nvme_iov_md": false 00:10:59.114 }, 00:10:59.114 "memory_domains": [ 00:10:59.114 { 00:10:59.114 "dma_device_id": "system", 00:10:59.114 "dma_device_type": 1 00:10:59.114 }, 00:10:59.114 { 00:10:59.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.114 "dma_device_type": 2 00:10:59.114 }, 00:10:59.114 { 00:10:59.114 "dma_device_id": "system", 00:10:59.114 "dma_device_type": 1 00:10:59.114 }, 00:10:59.114 { 00:10:59.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.114 "dma_device_type": 2 00:10:59.114 }, 00:10:59.114 { 00:10:59.114 "dma_device_id": "system", 00:10:59.114 "dma_device_type": 1 00:10:59.114 }, 00:10:59.114 { 00:10:59.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.114 "dma_device_type": 2 00:10:59.114 }, 00:10:59.114 { 00:10:59.114 "dma_device_id": "system", 00:10:59.114 "dma_device_type": 1 00:10:59.114 }, 00:10:59.114 { 00:10:59.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.114 "dma_device_type": 2 00:10:59.114 } 00:10:59.114 ], 00:10:59.114 "driver_specific": { 00:10:59.114 "raid": { 00:10:59.114 "uuid": "8538ae6b-2b1f-4814-929e-6859deb2086c", 00:10:59.114 "strip_size_kb": 64, 00:10:59.114 "state": "online", 00:10:59.114 "raid_level": "raid0", 00:10:59.114 "superblock": true, 00:10:59.114 "num_base_bdevs": 4, 00:10:59.114 "num_base_bdevs_discovered": 4, 00:10:59.114 "num_base_bdevs_operational": 4, 00:10:59.114 "base_bdevs_list": [ 00:10:59.114 { 00:10:59.114 "name": "BaseBdev1", 00:10:59.114 "uuid": "b72264e9-7080-48b2-80f0-8da210dc0dbd", 00:10:59.114 "is_configured": true, 00:10:59.114 "data_offset": 2048, 00:10:59.114 "data_size": 63488 00:10:59.114 }, 00:10:59.114 { 00:10:59.114 "name": "BaseBdev2", 00:10:59.114 "uuid": "890a2965-101c-4ef3-ace6-e28a092c57ab", 00:10:59.114 "is_configured": true, 00:10:59.114 "data_offset": 2048, 00:10:59.114 "data_size": 63488 00:10:59.114 }, 00:10:59.114 { 00:10:59.114 "name": "BaseBdev3", 00:10:59.114 "uuid": "1baa56c9-9914-452d-be3e-62a96c2fa439", 00:10:59.114 "is_configured": true, 
00:10:59.114 "data_offset": 2048, 00:10:59.114 "data_size": 63488 00:10:59.114 }, 00:10:59.114 { 00:10:59.114 "name": "BaseBdev4", 00:10:59.114 "uuid": "4445112a-9d0b-4b98-9cde-0e4bf531b574", 00:10:59.114 "is_configured": true, 00:10:59.114 "data_offset": 2048, 00:10:59.114 "data_size": 63488 00:10:59.114 } 00:10:59.114 ] 00:10:59.114 } 00:10:59.114 } 00:10:59.114 }' 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:59.114 BaseBdev2 00:10:59.114 BaseBdev3 00:10:59.114 BaseBdev4' 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.114 15:37:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.114 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.373 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.373 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.373 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.373 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:59.373 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.373 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.373 [2024-11-25 15:37:57.839594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:59.373 [2024-11-25 15:37:57.839672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.373 [2024-11-25 15:37:57.839752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.373 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.373 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:59.373 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:59.373 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:59.373 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:59.373 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:59.373 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:59.373 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.374 "name": "Existed_Raid", 00:10:59.374 "uuid": "8538ae6b-2b1f-4814-929e-6859deb2086c", 00:10:59.374 "strip_size_kb": 64, 00:10:59.374 "state": "offline", 00:10:59.374 "raid_level": "raid0", 00:10:59.374 "superblock": true, 00:10:59.374 "num_base_bdevs": 4, 00:10:59.374 "num_base_bdevs_discovered": 3, 00:10:59.374 "num_base_bdevs_operational": 3, 00:10:59.374 "base_bdevs_list": [ 00:10:59.374 { 00:10:59.374 "name": null, 00:10:59.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.374 "is_configured": false, 00:10:59.374 "data_offset": 0, 00:10:59.374 "data_size": 63488 00:10:59.374 }, 00:10:59.374 { 00:10:59.374 "name": "BaseBdev2", 00:10:59.374 "uuid": "890a2965-101c-4ef3-ace6-e28a092c57ab", 00:10:59.374 "is_configured": true, 00:10:59.374 "data_offset": 2048, 00:10:59.374 "data_size": 63488 00:10:59.374 }, 00:10:59.374 { 00:10:59.374 "name": "BaseBdev3", 00:10:59.374 "uuid": "1baa56c9-9914-452d-be3e-62a96c2fa439", 00:10:59.374 "is_configured": true, 00:10:59.374 "data_offset": 2048, 00:10:59.374 "data_size": 63488 00:10:59.374 }, 00:10:59.374 { 00:10:59.374 "name": "BaseBdev4", 00:10:59.374 "uuid": "4445112a-9d0b-4b98-9cde-0e4bf531b574", 00:10:59.374 "is_configured": true, 00:10:59.374 "data_offset": 2048, 00:10:59.374 "data_size": 63488 00:10:59.374 } 00:10:59.374 ] 00:10:59.374 }' 00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.374 15:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.941 
15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.941 [2024-11-25 15:37:58.411615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.941 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.941 [2024-11-25 15:37:58.558946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:00.200 15:37:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.200 [2024-11-25 15:37:58.695685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:00.200 [2024-11-25 15:37:58.695733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.200 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.459 BaseBdev2 00:11:00.459 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.459 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:00.459 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:00.459 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.459 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:00.459 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.459 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.459 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.459 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.459 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.459 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.459 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:00.459 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.459 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.459 [ 00:11:00.459 { 00:11:00.459 "name": "BaseBdev2", 00:11:00.459 "aliases": [ 00:11:00.459 
"1c0bdefa-1559-4fd0-8b95-e9e17165e4f0" 00:11:00.459 ], 00:11:00.459 "product_name": "Malloc disk", 00:11:00.459 "block_size": 512, 00:11:00.459 "num_blocks": 65536, 00:11:00.459 "uuid": "1c0bdefa-1559-4fd0-8b95-e9e17165e4f0", 00:11:00.459 "assigned_rate_limits": { 00:11:00.459 "rw_ios_per_sec": 0, 00:11:00.459 "rw_mbytes_per_sec": 0, 00:11:00.459 "r_mbytes_per_sec": 0, 00:11:00.459 "w_mbytes_per_sec": 0 00:11:00.459 }, 00:11:00.459 "claimed": false, 00:11:00.459 "zoned": false, 00:11:00.459 "supported_io_types": { 00:11:00.459 "read": true, 00:11:00.459 "write": true, 00:11:00.460 "unmap": true, 00:11:00.460 "flush": true, 00:11:00.460 "reset": true, 00:11:00.460 "nvme_admin": false, 00:11:00.460 "nvme_io": false, 00:11:00.460 "nvme_io_md": false, 00:11:00.460 "write_zeroes": true, 00:11:00.460 "zcopy": true, 00:11:00.460 "get_zone_info": false, 00:11:00.460 "zone_management": false, 00:11:00.460 "zone_append": false, 00:11:00.460 "compare": false, 00:11:00.460 "compare_and_write": false, 00:11:00.460 "abort": true, 00:11:00.460 "seek_hole": false, 00:11:00.460 "seek_data": false, 00:11:00.460 "copy": true, 00:11:00.460 "nvme_iov_md": false 00:11:00.460 }, 00:11:00.460 "memory_domains": [ 00:11:00.460 { 00:11:00.460 "dma_device_id": "system", 00:11:00.460 "dma_device_type": 1 00:11:00.460 }, 00:11:00.460 { 00:11:00.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.460 "dma_device_type": 2 00:11:00.460 } 00:11:00.460 ], 00:11:00.460 "driver_specific": {} 00:11:00.460 } 00:11:00.460 ] 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:00.460 15:37:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.460 BaseBdev3 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.460 [ 00:11:00.460 { 
00:11:00.460 "name": "BaseBdev3", 00:11:00.460 "aliases": [ 00:11:00.460 "13cc1cbe-9a24-49a6-ae09-fd3570688e63" 00:11:00.460 ], 00:11:00.460 "product_name": "Malloc disk", 00:11:00.460 "block_size": 512, 00:11:00.460 "num_blocks": 65536, 00:11:00.460 "uuid": "13cc1cbe-9a24-49a6-ae09-fd3570688e63", 00:11:00.460 "assigned_rate_limits": { 00:11:00.460 "rw_ios_per_sec": 0, 00:11:00.460 "rw_mbytes_per_sec": 0, 00:11:00.460 "r_mbytes_per_sec": 0, 00:11:00.460 "w_mbytes_per_sec": 0 00:11:00.460 }, 00:11:00.460 "claimed": false, 00:11:00.460 "zoned": false, 00:11:00.460 "supported_io_types": { 00:11:00.460 "read": true, 00:11:00.460 "write": true, 00:11:00.460 "unmap": true, 00:11:00.460 "flush": true, 00:11:00.460 "reset": true, 00:11:00.460 "nvme_admin": false, 00:11:00.460 "nvme_io": false, 00:11:00.460 "nvme_io_md": false, 00:11:00.460 "write_zeroes": true, 00:11:00.460 "zcopy": true, 00:11:00.460 "get_zone_info": false, 00:11:00.460 "zone_management": false, 00:11:00.460 "zone_append": false, 00:11:00.460 "compare": false, 00:11:00.460 "compare_and_write": false, 00:11:00.460 "abort": true, 00:11:00.460 "seek_hole": false, 00:11:00.460 "seek_data": false, 00:11:00.460 "copy": true, 00:11:00.460 "nvme_iov_md": false 00:11:00.460 }, 00:11:00.460 "memory_domains": [ 00:11:00.460 { 00:11:00.460 "dma_device_id": "system", 00:11:00.460 "dma_device_type": 1 00:11:00.460 }, 00:11:00.460 { 00:11:00.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.460 "dma_device_type": 2 00:11:00.460 } 00:11:00.460 ], 00:11:00.460 "driver_specific": {} 00:11:00.460 } 00:11:00.460 ] 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:00.460 15:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.460 BaseBdev4 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:00.460 [ 00:11:00.460 { 00:11:00.460 "name": "BaseBdev4", 00:11:00.460 "aliases": [ 00:11:00.460 "b2b11f51-5e9d-43a0-b6f4-7d9629b1cf28" 00:11:00.460 ], 00:11:00.460 "product_name": "Malloc disk", 00:11:00.460 "block_size": 512, 00:11:00.460 "num_blocks": 65536, 00:11:00.460 "uuid": "b2b11f51-5e9d-43a0-b6f4-7d9629b1cf28", 00:11:00.460 "assigned_rate_limits": { 00:11:00.460 "rw_ios_per_sec": 0, 00:11:00.460 "rw_mbytes_per_sec": 0, 00:11:00.460 "r_mbytes_per_sec": 0, 00:11:00.460 "w_mbytes_per_sec": 0 00:11:00.460 }, 00:11:00.460 "claimed": false, 00:11:00.460 "zoned": false, 00:11:00.460 "supported_io_types": { 00:11:00.460 "read": true, 00:11:00.460 "write": true, 00:11:00.460 "unmap": true, 00:11:00.460 "flush": true, 00:11:00.460 "reset": true, 00:11:00.460 "nvme_admin": false, 00:11:00.460 "nvme_io": false, 00:11:00.460 "nvme_io_md": false, 00:11:00.460 "write_zeroes": true, 00:11:00.460 "zcopy": true, 00:11:00.460 "get_zone_info": false, 00:11:00.460 "zone_management": false, 00:11:00.460 "zone_append": false, 00:11:00.460 "compare": false, 00:11:00.460 "compare_and_write": false, 00:11:00.460 "abort": true, 00:11:00.460 "seek_hole": false, 00:11:00.460 "seek_data": false, 00:11:00.460 "copy": true, 00:11:00.460 "nvme_iov_md": false 00:11:00.460 }, 00:11:00.460 "memory_domains": [ 00:11:00.460 { 00:11:00.460 "dma_device_id": "system", 00:11:00.460 "dma_device_type": 1 00:11:00.460 }, 00:11:00.460 { 00:11:00.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.460 "dma_device_type": 2 00:11:00.460 } 00:11:00.460 ], 00:11:00.460 "driver_specific": {} 00:11:00.460 } 00:11:00.460 ] 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:00.460 15:37:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.460 [2024-11-25 15:37:59.083097] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:00.460 [2024-11-25 15:37:59.083206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:00.460 [2024-11-25 15:37:59.083257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.460 [2024-11-25 15:37:59.085068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.460 [2024-11-25 15:37:59.085159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.460 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.461 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.461 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.461 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:00.461 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.461 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.461 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.461 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.461 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.461 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.461 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.461 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.461 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.461 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.461 "name": "Existed_Raid", 00:11:00.461 "uuid": "38685665-e500-40a1-8654-639f50053495", 00:11:00.461 "strip_size_kb": 64, 00:11:00.461 "state": "configuring", 00:11:00.461 "raid_level": "raid0", 00:11:00.461 "superblock": true, 00:11:00.461 "num_base_bdevs": 4, 00:11:00.461 "num_base_bdevs_discovered": 3, 00:11:00.461 "num_base_bdevs_operational": 4, 00:11:00.461 "base_bdevs_list": [ 00:11:00.461 { 00:11:00.461 "name": "BaseBdev1", 00:11:00.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.461 "is_configured": false, 00:11:00.461 "data_offset": 0, 00:11:00.461 "data_size": 0 00:11:00.461 }, 00:11:00.461 { 00:11:00.461 "name": "BaseBdev2", 00:11:00.461 "uuid": "1c0bdefa-1559-4fd0-8b95-e9e17165e4f0", 00:11:00.461 "is_configured": true, 00:11:00.461 "data_offset": 2048, 00:11:00.461 "data_size": 63488 
00:11:00.461 }, 00:11:00.461 { 00:11:00.461 "name": "BaseBdev3", 00:11:00.461 "uuid": "13cc1cbe-9a24-49a6-ae09-fd3570688e63", 00:11:00.461 "is_configured": true, 00:11:00.461 "data_offset": 2048, 00:11:00.461 "data_size": 63488 00:11:00.461 }, 00:11:00.461 { 00:11:00.461 "name": "BaseBdev4", 00:11:00.461 "uuid": "b2b11f51-5e9d-43a0-b6f4-7d9629b1cf28", 00:11:00.461 "is_configured": true, 00:11:00.461 "data_offset": 2048, 00:11:00.461 "data_size": 63488 00:11:00.461 } 00:11:00.461 ] 00:11:00.461 }' 00:11:00.461 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.723 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.983 [2024-11-25 15:37:59.502376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.983 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.983 "name": "Existed_Raid", 00:11:00.983 "uuid": "38685665-e500-40a1-8654-639f50053495", 00:11:00.983 "strip_size_kb": 64, 00:11:00.983 "state": "configuring", 00:11:00.983 "raid_level": "raid0", 00:11:00.983 "superblock": true, 00:11:00.983 "num_base_bdevs": 4, 00:11:00.983 "num_base_bdevs_discovered": 2, 00:11:00.983 "num_base_bdevs_operational": 4, 00:11:00.983 "base_bdevs_list": [ 00:11:00.983 { 00:11:00.983 "name": "BaseBdev1", 00:11:00.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.983 "is_configured": false, 00:11:00.983 "data_offset": 0, 00:11:00.983 "data_size": 0 00:11:00.983 }, 00:11:00.983 { 00:11:00.983 "name": null, 00:11:00.983 "uuid": "1c0bdefa-1559-4fd0-8b95-e9e17165e4f0", 00:11:00.983 "is_configured": false, 00:11:00.983 "data_offset": 0, 00:11:00.984 "data_size": 63488 
00:11:00.984 }, 00:11:00.984 { 00:11:00.984 "name": "BaseBdev3", 00:11:00.984 "uuid": "13cc1cbe-9a24-49a6-ae09-fd3570688e63", 00:11:00.984 "is_configured": true, 00:11:00.984 "data_offset": 2048, 00:11:00.984 "data_size": 63488 00:11:00.984 }, 00:11:00.984 { 00:11:00.984 "name": "BaseBdev4", 00:11:00.984 "uuid": "b2b11f51-5e9d-43a0-b6f4-7d9629b1cf28", 00:11:00.984 "is_configured": true, 00:11:00.984 "data_offset": 2048, 00:11:00.984 "data_size": 63488 00:11:00.984 } 00:11:00.984 ] 00:11:00.984 }' 00:11:00.984 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.984 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.258 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.258 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.258 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.258 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:01.258 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.533 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:01.533 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:01.533 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.533 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.533 [2024-11-25 15:37:59.973550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.533 BaseBdev1 00:11:01.533 15:37:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.533 15:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:01.533 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:01.533 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.533 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:01.533 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.533 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.534 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.534 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.534 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.534 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.534 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:01.534 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.534 15:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.534 [ 00:11:01.534 { 00:11:01.534 "name": "BaseBdev1", 00:11:01.534 "aliases": [ 00:11:01.534 "c392cbe0-4572-47fc-89d8-e513b466fcee" 00:11:01.534 ], 00:11:01.534 "product_name": "Malloc disk", 00:11:01.534 "block_size": 512, 00:11:01.534 "num_blocks": 65536, 00:11:01.534 "uuid": "c392cbe0-4572-47fc-89d8-e513b466fcee", 00:11:01.534 "assigned_rate_limits": { 00:11:01.534 "rw_ios_per_sec": 0, 00:11:01.534 "rw_mbytes_per_sec": 0, 
00:11:01.534 "r_mbytes_per_sec": 0, 00:11:01.534 "w_mbytes_per_sec": 0 00:11:01.534 }, 00:11:01.534 "claimed": true, 00:11:01.534 "claim_type": "exclusive_write", 00:11:01.534 "zoned": false, 00:11:01.534 "supported_io_types": { 00:11:01.534 "read": true, 00:11:01.534 "write": true, 00:11:01.534 "unmap": true, 00:11:01.534 "flush": true, 00:11:01.534 "reset": true, 00:11:01.534 "nvme_admin": false, 00:11:01.534 "nvme_io": false, 00:11:01.534 "nvme_io_md": false, 00:11:01.534 "write_zeroes": true, 00:11:01.534 "zcopy": true, 00:11:01.534 "get_zone_info": false, 00:11:01.534 "zone_management": false, 00:11:01.534 "zone_append": false, 00:11:01.534 "compare": false, 00:11:01.534 "compare_and_write": false, 00:11:01.534 "abort": true, 00:11:01.534 "seek_hole": false, 00:11:01.534 "seek_data": false, 00:11:01.534 "copy": true, 00:11:01.534 "nvme_iov_md": false 00:11:01.534 }, 00:11:01.534 "memory_domains": [ 00:11:01.534 { 00:11:01.534 "dma_device_id": "system", 00:11:01.534 "dma_device_type": 1 00:11:01.534 }, 00:11:01.534 { 00:11:01.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.534 "dma_device_type": 2 00:11:01.534 } 00:11:01.534 ], 00:11:01.534 "driver_specific": {} 00:11:01.534 } 00:11:01.534 ] 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.534 15:38:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.534 "name": "Existed_Raid", 00:11:01.534 "uuid": "38685665-e500-40a1-8654-639f50053495", 00:11:01.534 "strip_size_kb": 64, 00:11:01.534 "state": "configuring", 00:11:01.534 "raid_level": "raid0", 00:11:01.534 "superblock": true, 00:11:01.534 "num_base_bdevs": 4, 00:11:01.534 "num_base_bdevs_discovered": 3, 00:11:01.534 "num_base_bdevs_operational": 4, 00:11:01.534 "base_bdevs_list": [ 00:11:01.534 { 00:11:01.534 "name": "BaseBdev1", 00:11:01.534 "uuid": "c392cbe0-4572-47fc-89d8-e513b466fcee", 00:11:01.534 "is_configured": true, 00:11:01.534 "data_offset": 2048, 00:11:01.534 "data_size": 63488 00:11:01.534 }, 00:11:01.534 { 
00:11:01.534 "name": null, 00:11:01.534 "uuid": "1c0bdefa-1559-4fd0-8b95-e9e17165e4f0", 00:11:01.534 "is_configured": false, 00:11:01.534 "data_offset": 0, 00:11:01.534 "data_size": 63488 00:11:01.534 }, 00:11:01.534 { 00:11:01.534 "name": "BaseBdev3", 00:11:01.534 "uuid": "13cc1cbe-9a24-49a6-ae09-fd3570688e63", 00:11:01.534 "is_configured": true, 00:11:01.534 "data_offset": 2048, 00:11:01.534 "data_size": 63488 00:11:01.534 }, 00:11:01.534 { 00:11:01.534 "name": "BaseBdev4", 00:11:01.534 "uuid": "b2b11f51-5e9d-43a0-b6f4-7d9629b1cf28", 00:11:01.534 "is_configured": true, 00:11:01.534 "data_offset": 2048, 00:11:01.534 "data_size": 63488 00:11:01.534 } 00:11:01.534 ] 00:11:01.534 }' 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.534 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.792 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.792 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.792 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.792 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:02.051 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.051 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:02.051 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:02.051 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.052 [2024-11-25 15:38:00.508708] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.052 15:38:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.052 "name": "Existed_Raid", 00:11:02.052 "uuid": "38685665-e500-40a1-8654-639f50053495", 00:11:02.052 "strip_size_kb": 64, 00:11:02.052 "state": "configuring", 00:11:02.052 "raid_level": "raid0", 00:11:02.052 "superblock": true, 00:11:02.052 "num_base_bdevs": 4, 00:11:02.052 "num_base_bdevs_discovered": 2, 00:11:02.052 "num_base_bdevs_operational": 4, 00:11:02.052 "base_bdevs_list": [ 00:11:02.052 { 00:11:02.052 "name": "BaseBdev1", 00:11:02.052 "uuid": "c392cbe0-4572-47fc-89d8-e513b466fcee", 00:11:02.052 "is_configured": true, 00:11:02.052 "data_offset": 2048, 00:11:02.052 "data_size": 63488 00:11:02.052 }, 00:11:02.052 { 00:11:02.052 "name": null, 00:11:02.052 "uuid": "1c0bdefa-1559-4fd0-8b95-e9e17165e4f0", 00:11:02.052 "is_configured": false, 00:11:02.052 "data_offset": 0, 00:11:02.052 "data_size": 63488 00:11:02.052 }, 00:11:02.052 { 00:11:02.052 "name": null, 00:11:02.052 "uuid": "13cc1cbe-9a24-49a6-ae09-fd3570688e63", 00:11:02.052 "is_configured": false, 00:11:02.052 "data_offset": 0, 00:11:02.052 "data_size": 63488 00:11:02.052 }, 00:11:02.052 { 00:11:02.052 "name": "BaseBdev4", 00:11:02.052 "uuid": "b2b11f51-5e9d-43a0-b6f4-7d9629b1cf28", 00:11:02.052 "is_configured": true, 00:11:02.052 "data_offset": 2048, 00:11:02.052 "data_size": 63488 00:11:02.052 } 00:11:02.052 ] 00:11:02.052 }' 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.052 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.311 15:38:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.311 [2024-11-25 15:38:00.979921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.311 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.570 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.570 15:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.570 15:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.570 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.571 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.571 "name": "Existed_Raid", 00:11:02.571 "uuid": "38685665-e500-40a1-8654-639f50053495", 00:11:02.571 "strip_size_kb": 64, 00:11:02.571 "state": "configuring", 00:11:02.571 "raid_level": "raid0", 00:11:02.571 "superblock": true, 00:11:02.571 "num_base_bdevs": 4, 00:11:02.571 "num_base_bdevs_discovered": 3, 00:11:02.571 "num_base_bdevs_operational": 4, 00:11:02.571 "base_bdevs_list": [ 00:11:02.571 { 00:11:02.571 "name": "BaseBdev1", 00:11:02.571 "uuid": "c392cbe0-4572-47fc-89d8-e513b466fcee", 00:11:02.571 "is_configured": true, 00:11:02.571 "data_offset": 2048, 00:11:02.571 "data_size": 63488 00:11:02.571 }, 00:11:02.571 { 00:11:02.571 "name": null, 00:11:02.571 "uuid": "1c0bdefa-1559-4fd0-8b95-e9e17165e4f0", 00:11:02.571 "is_configured": false, 00:11:02.571 "data_offset": 0, 00:11:02.571 "data_size": 63488 00:11:02.571 }, 00:11:02.571 { 00:11:02.571 "name": "BaseBdev3", 00:11:02.571 "uuid": "13cc1cbe-9a24-49a6-ae09-fd3570688e63", 00:11:02.571 "is_configured": true, 00:11:02.571 "data_offset": 2048, 00:11:02.571 "data_size": 63488 00:11:02.571 }, 00:11:02.571 { 00:11:02.571 "name": "BaseBdev4", 00:11:02.571 "uuid": 
"b2b11f51-5e9d-43a0-b6f4-7d9629b1cf28", 00:11:02.571 "is_configured": true, 00:11:02.571 "data_offset": 2048, 00:11:02.571 "data_size": 63488 00:11:02.571 } 00:11:02.571 ] 00:11:02.571 }' 00:11:02.571 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.571 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.829 [2024-11-25 15:38:01.407240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.829 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.830 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.830 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.830 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.089 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.089 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.089 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.089 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.089 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.089 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.089 "name": "Existed_Raid", 00:11:03.089 "uuid": "38685665-e500-40a1-8654-639f50053495", 00:11:03.089 "strip_size_kb": 64, 00:11:03.089 "state": "configuring", 00:11:03.089 "raid_level": "raid0", 00:11:03.089 "superblock": true, 00:11:03.089 "num_base_bdevs": 4, 00:11:03.089 "num_base_bdevs_discovered": 2, 00:11:03.089 "num_base_bdevs_operational": 4, 00:11:03.089 "base_bdevs_list": [ 00:11:03.089 { 00:11:03.089 "name": null, 00:11:03.089 
"uuid": "c392cbe0-4572-47fc-89d8-e513b466fcee", 00:11:03.089 "is_configured": false, 00:11:03.089 "data_offset": 0, 00:11:03.089 "data_size": 63488 00:11:03.089 }, 00:11:03.089 { 00:11:03.089 "name": null, 00:11:03.089 "uuid": "1c0bdefa-1559-4fd0-8b95-e9e17165e4f0", 00:11:03.089 "is_configured": false, 00:11:03.089 "data_offset": 0, 00:11:03.089 "data_size": 63488 00:11:03.089 }, 00:11:03.089 { 00:11:03.089 "name": "BaseBdev3", 00:11:03.089 "uuid": "13cc1cbe-9a24-49a6-ae09-fd3570688e63", 00:11:03.089 "is_configured": true, 00:11:03.089 "data_offset": 2048, 00:11:03.089 "data_size": 63488 00:11:03.089 }, 00:11:03.089 { 00:11:03.089 "name": "BaseBdev4", 00:11:03.089 "uuid": "b2b11f51-5e9d-43a0-b6f4-7d9629b1cf28", 00:11:03.089 "is_configured": true, 00:11:03.089 "data_offset": 2048, 00:11:03.089 "data_size": 63488 00:11:03.089 } 00:11:03.089 ] 00:11:03.089 }' 00:11:03.089 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.089 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.349 [2024-11-25 15:38:01.985166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.349 15:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.349 15:38:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.349 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.608 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.608 "name": "Existed_Raid", 00:11:03.608 "uuid": "38685665-e500-40a1-8654-639f50053495", 00:11:03.608 "strip_size_kb": 64, 00:11:03.608 "state": "configuring", 00:11:03.608 "raid_level": "raid0", 00:11:03.608 "superblock": true, 00:11:03.608 "num_base_bdevs": 4, 00:11:03.608 "num_base_bdevs_discovered": 3, 00:11:03.608 "num_base_bdevs_operational": 4, 00:11:03.608 "base_bdevs_list": [ 00:11:03.608 { 00:11:03.608 "name": null, 00:11:03.608 "uuid": "c392cbe0-4572-47fc-89d8-e513b466fcee", 00:11:03.608 "is_configured": false, 00:11:03.608 "data_offset": 0, 00:11:03.608 "data_size": 63488 00:11:03.608 }, 00:11:03.608 { 00:11:03.608 "name": "BaseBdev2", 00:11:03.608 "uuid": "1c0bdefa-1559-4fd0-8b95-e9e17165e4f0", 00:11:03.608 "is_configured": true, 00:11:03.608 "data_offset": 2048, 00:11:03.608 "data_size": 63488 00:11:03.608 }, 00:11:03.608 { 00:11:03.608 "name": "BaseBdev3", 00:11:03.608 "uuid": "13cc1cbe-9a24-49a6-ae09-fd3570688e63", 00:11:03.608 "is_configured": true, 00:11:03.608 "data_offset": 2048, 00:11:03.608 "data_size": 63488 00:11:03.608 }, 00:11:03.608 { 00:11:03.608 "name": "BaseBdev4", 00:11:03.608 "uuid": "b2b11f51-5e9d-43a0-b6f4-7d9629b1cf28", 00:11:03.608 "is_configured": true, 00:11:03.608 "data_offset": 2048, 00:11:03.608 "data_size": 63488 00:11:03.608 } 00:11:03.608 ] 00:11:03.608 }' 00:11:03.608 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.608 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.868 15:38:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c392cbe0-4572-47fc-89d8-e513b466fcee 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.868 [2024-11-25 15:38:02.544327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:03.868 [2024-11-25 15:38:02.544617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:03.868 [2024-11-25 15:38:02.544664] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:03.868 [2024-11-25 15:38:02.544939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:03.868 [2024-11-25 15:38:02.545143] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:03.868 [2024-11-25 15:38:02.545191] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:03.868 NewBaseBdev 00:11:03.868 [2024-11-25 15:38:02.545377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:03.868 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.128 15:38:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.128 [ 00:11:04.128 { 00:11:04.128 "name": "NewBaseBdev", 00:11:04.128 "aliases": [ 00:11:04.128 "c392cbe0-4572-47fc-89d8-e513b466fcee" 00:11:04.128 ], 00:11:04.128 "product_name": "Malloc disk", 00:11:04.128 "block_size": 512, 00:11:04.128 "num_blocks": 65536, 00:11:04.128 "uuid": "c392cbe0-4572-47fc-89d8-e513b466fcee", 00:11:04.128 "assigned_rate_limits": { 00:11:04.128 "rw_ios_per_sec": 0, 00:11:04.128 "rw_mbytes_per_sec": 0, 00:11:04.128 "r_mbytes_per_sec": 0, 00:11:04.128 "w_mbytes_per_sec": 0 00:11:04.128 }, 00:11:04.128 "claimed": true, 00:11:04.128 "claim_type": "exclusive_write", 00:11:04.128 "zoned": false, 00:11:04.128 "supported_io_types": { 00:11:04.128 "read": true, 00:11:04.128 "write": true, 00:11:04.128 "unmap": true, 00:11:04.128 "flush": true, 00:11:04.128 "reset": true, 00:11:04.128 "nvme_admin": false, 00:11:04.128 "nvme_io": false, 00:11:04.128 "nvme_io_md": false, 00:11:04.128 "write_zeroes": true, 00:11:04.128 "zcopy": true, 00:11:04.128 "get_zone_info": false, 00:11:04.128 "zone_management": false, 00:11:04.128 "zone_append": false, 00:11:04.128 "compare": false, 00:11:04.128 "compare_and_write": false, 00:11:04.128 "abort": true, 00:11:04.128 "seek_hole": false, 00:11:04.128 "seek_data": false, 00:11:04.128 "copy": true, 00:11:04.128 "nvme_iov_md": false 00:11:04.128 }, 00:11:04.128 "memory_domains": [ 00:11:04.128 { 00:11:04.128 "dma_device_id": "system", 00:11:04.128 "dma_device_type": 1 00:11:04.128 }, 00:11:04.128 { 00:11:04.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.128 "dma_device_type": 2 00:11:04.128 } 00:11:04.128 ], 00:11:04.128 "driver_specific": {} 00:11:04.128 } 00:11:04.128 ] 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:04.128 15:38:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.128 "name": "Existed_Raid", 00:11:04.128 "uuid": "38685665-e500-40a1-8654-639f50053495", 00:11:04.128 "strip_size_kb": 64, 00:11:04.128 
"state": "online", 00:11:04.128 "raid_level": "raid0", 00:11:04.128 "superblock": true, 00:11:04.128 "num_base_bdevs": 4, 00:11:04.128 "num_base_bdevs_discovered": 4, 00:11:04.128 "num_base_bdevs_operational": 4, 00:11:04.128 "base_bdevs_list": [ 00:11:04.128 { 00:11:04.128 "name": "NewBaseBdev", 00:11:04.128 "uuid": "c392cbe0-4572-47fc-89d8-e513b466fcee", 00:11:04.128 "is_configured": true, 00:11:04.128 "data_offset": 2048, 00:11:04.128 "data_size": 63488 00:11:04.128 }, 00:11:04.128 { 00:11:04.128 "name": "BaseBdev2", 00:11:04.128 "uuid": "1c0bdefa-1559-4fd0-8b95-e9e17165e4f0", 00:11:04.128 "is_configured": true, 00:11:04.128 "data_offset": 2048, 00:11:04.128 "data_size": 63488 00:11:04.128 }, 00:11:04.128 { 00:11:04.128 "name": "BaseBdev3", 00:11:04.128 "uuid": "13cc1cbe-9a24-49a6-ae09-fd3570688e63", 00:11:04.128 "is_configured": true, 00:11:04.128 "data_offset": 2048, 00:11:04.128 "data_size": 63488 00:11:04.128 }, 00:11:04.128 { 00:11:04.128 "name": "BaseBdev4", 00:11:04.128 "uuid": "b2b11f51-5e9d-43a0-b6f4-7d9629b1cf28", 00:11:04.128 "is_configured": true, 00:11:04.128 "data_offset": 2048, 00:11:04.128 "data_size": 63488 00:11:04.128 } 00:11:04.128 ] 00:11:04.128 }' 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.128 15:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.388 15:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:04.389 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:04.389 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:04.389 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:04.389 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:04.389 
15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:04.389 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:04.389 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:04.389 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.389 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.389 [2024-11-25 15:38:03.011944] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.389 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.389 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:04.389 "name": "Existed_Raid", 00:11:04.389 "aliases": [ 00:11:04.389 "38685665-e500-40a1-8654-639f50053495" 00:11:04.389 ], 00:11:04.389 "product_name": "Raid Volume", 00:11:04.389 "block_size": 512, 00:11:04.389 "num_blocks": 253952, 00:11:04.389 "uuid": "38685665-e500-40a1-8654-639f50053495", 00:11:04.389 "assigned_rate_limits": { 00:11:04.389 "rw_ios_per_sec": 0, 00:11:04.389 "rw_mbytes_per_sec": 0, 00:11:04.389 "r_mbytes_per_sec": 0, 00:11:04.389 "w_mbytes_per_sec": 0 00:11:04.389 }, 00:11:04.389 "claimed": false, 00:11:04.389 "zoned": false, 00:11:04.389 "supported_io_types": { 00:11:04.389 "read": true, 00:11:04.389 "write": true, 00:11:04.389 "unmap": true, 00:11:04.389 "flush": true, 00:11:04.389 "reset": true, 00:11:04.389 "nvme_admin": false, 00:11:04.389 "nvme_io": false, 00:11:04.389 "nvme_io_md": false, 00:11:04.389 "write_zeroes": true, 00:11:04.389 "zcopy": false, 00:11:04.389 "get_zone_info": false, 00:11:04.389 "zone_management": false, 00:11:04.389 "zone_append": false, 00:11:04.389 "compare": false, 00:11:04.389 "compare_and_write": false, 00:11:04.389 "abort": 
false, 00:11:04.389 "seek_hole": false, 00:11:04.389 "seek_data": false, 00:11:04.389 "copy": false, 00:11:04.389 "nvme_iov_md": false 00:11:04.389 }, 00:11:04.389 "memory_domains": [ 00:11:04.389 { 00:11:04.389 "dma_device_id": "system", 00:11:04.389 "dma_device_type": 1 00:11:04.389 }, 00:11:04.389 { 00:11:04.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.389 "dma_device_type": 2 00:11:04.389 }, 00:11:04.389 { 00:11:04.389 "dma_device_id": "system", 00:11:04.389 "dma_device_type": 1 00:11:04.389 }, 00:11:04.389 { 00:11:04.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.389 "dma_device_type": 2 00:11:04.389 }, 00:11:04.389 { 00:11:04.389 "dma_device_id": "system", 00:11:04.389 "dma_device_type": 1 00:11:04.389 }, 00:11:04.389 { 00:11:04.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.389 "dma_device_type": 2 00:11:04.389 }, 00:11:04.389 { 00:11:04.389 "dma_device_id": "system", 00:11:04.389 "dma_device_type": 1 00:11:04.389 }, 00:11:04.389 { 00:11:04.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.389 "dma_device_type": 2 00:11:04.389 } 00:11:04.389 ], 00:11:04.389 "driver_specific": { 00:11:04.389 "raid": { 00:11:04.389 "uuid": "38685665-e500-40a1-8654-639f50053495", 00:11:04.389 "strip_size_kb": 64, 00:11:04.389 "state": "online", 00:11:04.389 "raid_level": "raid0", 00:11:04.389 "superblock": true, 00:11:04.389 "num_base_bdevs": 4, 00:11:04.389 "num_base_bdevs_discovered": 4, 00:11:04.389 "num_base_bdevs_operational": 4, 00:11:04.389 "base_bdevs_list": [ 00:11:04.389 { 00:11:04.389 "name": "NewBaseBdev", 00:11:04.389 "uuid": "c392cbe0-4572-47fc-89d8-e513b466fcee", 00:11:04.389 "is_configured": true, 00:11:04.389 "data_offset": 2048, 00:11:04.389 "data_size": 63488 00:11:04.389 }, 00:11:04.389 { 00:11:04.389 "name": "BaseBdev2", 00:11:04.389 "uuid": "1c0bdefa-1559-4fd0-8b95-e9e17165e4f0", 00:11:04.389 "is_configured": true, 00:11:04.389 "data_offset": 2048, 00:11:04.389 "data_size": 63488 00:11:04.389 }, 00:11:04.389 { 00:11:04.389 
"name": "BaseBdev3", 00:11:04.389 "uuid": "13cc1cbe-9a24-49a6-ae09-fd3570688e63", 00:11:04.389 "is_configured": true, 00:11:04.389 "data_offset": 2048, 00:11:04.389 "data_size": 63488 00:11:04.389 }, 00:11:04.389 { 00:11:04.389 "name": "BaseBdev4", 00:11:04.389 "uuid": "b2b11f51-5e9d-43a0-b6f4-7d9629b1cf28", 00:11:04.389 "is_configured": true, 00:11:04.389 "data_offset": 2048, 00:11:04.389 "data_size": 63488 00:11:04.389 } 00:11:04.389 ] 00:11:04.389 } 00:11:04.389 } 00:11:04.389 }' 00:11:04.389 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:04.648 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:04.648 BaseBdev2 00:11:04.648 BaseBdev3 00:11:04.648 BaseBdev4' 00:11:04.648 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.648 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:04.648 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.648 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:04.648 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.648 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.648 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.648 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.649 15:38:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.649 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.908 [2024-11-25 15:38:03.343095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.908 [2024-11-25 15:38:03.343124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.908 [2024-11-25 15:38:03.343198] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.908 [2024-11-25 15:38:03.343266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.908 [2024-11-25 15:38:03.343277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69805 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69805 ']' 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69805 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69805 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69805' 00:11:04.908 killing process with pid 69805 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69805 00:11:04.908 [2024-11-25 15:38:03.377512] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:04.908 15:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69805 00:11:05.167 [2024-11-25 15:38:03.767242] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.546 15:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:06.546 00:11:06.546 real 0m11.122s 00:11:06.546 user 0m17.671s 00:11:06.546 sys 0m1.924s 00:11:06.546 15:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.546 
************************************ 00:11:06.546 END TEST raid_state_function_test_sb 00:11:06.546 ************************************ 00:11:06.546 15:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.546 15:38:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:06.546 15:38:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:06.546 15:38:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.546 15:38:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.546 ************************************ 00:11:06.546 START TEST raid_superblock_test 00:11:06.546 ************************************ 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:06.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70475 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70475 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70475 ']' 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.546 15:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.546 [2024-11-25 15:38:04.997122] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:11:06.546 [2024-11-25 15:38:04.997249] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70475 ] 00:11:06.546 [2024-11-25 15:38:05.168936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.805 [2024-11-25 15:38:05.270900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.805 [2024-11-25 15:38:05.470310] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.805 [2024-11-25 15:38:05.470445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:07.374 
15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.374 malloc1 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.374 [2024-11-25 15:38:05.860818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:07.374 [2024-11-25 15:38:05.860920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.374 [2024-11-25 15:38:05.860979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:07.374 [2024-11-25 15:38:05.861017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.374 [2024-11-25 15:38:05.863068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.374 [2024-11-25 15:38:05.863136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:07.374 pt1 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.374 malloc2 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.374 [2024-11-25 15:38:05.912529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:07.374 [2024-11-25 15:38:05.912625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.374 [2024-11-25 15:38:05.912673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:07.374 [2024-11-25 15:38:05.912705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.374 [2024-11-25 15:38:05.914781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.374 [2024-11-25 15:38:05.914867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:07.374 
pt2 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.374 malloc3 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.374 15:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.374 [2024-11-25 15:38:06.000874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:07.374 [2024-11-25 15:38:06.000923] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.374 [2024-11-25 15:38:06.000944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:07.374 [2024-11-25 15:38:06.000952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.374 [2024-11-25 15:38:06.002981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.374 [2024-11-25 15:38:06.003020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:07.374 pt3 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.374 malloc4 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.374 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.374 [2024-11-25 15:38:06.050448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:07.374 [2024-11-25 15:38:06.050545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.374 [2024-11-25 15:38:06.050585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:07.374 [2024-11-25 15:38:06.050629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.374 [2024-11-25 15:38:06.052677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.374 [2024-11-25 15:38:06.052758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:07.635 pt4 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.635 [2024-11-25 15:38:06.062463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:07.635 [2024-11-25 
15:38:06.064223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:07.635 [2024-11-25 15:38:06.064337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:07.635 [2024-11-25 15:38:06.064433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:07.635 [2024-11-25 15:38:06.064646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:07.635 [2024-11-25 15:38:06.064692] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:07.635 [2024-11-25 15:38:06.064954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:07.635 [2024-11-25 15:38:06.065161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:07.635 [2024-11-25 15:38:06.065208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:07.635 [2024-11-25 15:38:06.065394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.635 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.635 "name": "raid_bdev1", 00:11:07.635 "uuid": "ab1781ac-b45a-4de8-9cf8-395e0a611070", 00:11:07.635 "strip_size_kb": 64, 00:11:07.635 "state": "online", 00:11:07.635 "raid_level": "raid0", 00:11:07.635 "superblock": true, 00:11:07.635 "num_base_bdevs": 4, 00:11:07.635 "num_base_bdevs_discovered": 4, 00:11:07.635 "num_base_bdevs_operational": 4, 00:11:07.635 "base_bdevs_list": [ 00:11:07.635 { 00:11:07.635 "name": "pt1", 00:11:07.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.635 "is_configured": true, 00:11:07.635 "data_offset": 2048, 00:11:07.635 "data_size": 63488 00:11:07.635 }, 00:11:07.635 { 00:11:07.635 "name": "pt2", 00:11:07.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.635 "is_configured": true, 00:11:07.635 "data_offset": 2048, 00:11:07.636 "data_size": 63488 00:11:07.636 }, 00:11:07.636 { 00:11:07.636 "name": "pt3", 00:11:07.636 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.636 "is_configured": true, 00:11:07.636 "data_offset": 2048, 00:11:07.636 
"data_size": 63488 00:11:07.636 }, 00:11:07.636 { 00:11:07.636 "name": "pt4", 00:11:07.636 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.636 "is_configured": true, 00:11:07.636 "data_offset": 2048, 00:11:07.636 "data_size": 63488 00:11:07.636 } 00:11:07.636 ] 00:11:07.636 }' 00:11:07.636 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.636 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.896 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:07.896 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:07.896 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.896 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.896 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.896 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.896 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:07.896 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.896 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.896 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.896 [2024-11-25 15:38:06.521985] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.896 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.896 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.896 "name": "raid_bdev1", 00:11:07.896 "aliases": [ 00:11:07.896 "ab1781ac-b45a-4de8-9cf8-395e0a611070" 
00:11:07.896 ], 00:11:07.896 "product_name": "Raid Volume", 00:11:07.896 "block_size": 512, 00:11:07.896 "num_blocks": 253952, 00:11:07.896 "uuid": "ab1781ac-b45a-4de8-9cf8-395e0a611070", 00:11:07.896 "assigned_rate_limits": { 00:11:07.896 "rw_ios_per_sec": 0, 00:11:07.896 "rw_mbytes_per_sec": 0, 00:11:07.896 "r_mbytes_per_sec": 0, 00:11:07.896 "w_mbytes_per_sec": 0 00:11:07.896 }, 00:11:07.896 "claimed": false, 00:11:07.896 "zoned": false, 00:11:07.896 "supported_io_types": { 00:11:07.896 "read": true, 00:11:07.896 "write": true, 00:11:07.896 "unmap": true, 00:11:07.896 "flush": true, 00:11:07.896 "reset": true, 00:11:07.896 "nvme_admin": false, 00:11:07.896 "nvme_io": false, 00:11:07.896 "nvme_io_md": false, 00:11:07.896 "write_zeroes": true, 00:11:07.896 "zcopy": false, 00:11:07.896 "get_zone_info": false, 00:11:07.896 "zone_management": false, 00:11:07.896 "zone_append": false, 00:11:07.896 "compare": false, 00:11:07.896 "compare_and_write": false, 00:11:07.896 "abort": false, 00:11:07.896 "seek_hole": false, 00:11:07.896 "seek_data": false, 00:11:07.896 "copy": false, 00:11:07.896 "nvme_iov_md": false 00:11:07.896 }, 00:11:07.896 "memory_domains": [ 00:11:07.896 { 00:11:07.896 "dma_device_id": "system", 00:11:07.896 "dma_device_type": 1 00:11:07.896 }, 00:11:07.896 { 00:11:07.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.896 "dma_device_type": 2 00:11:07.896 }, 00:11:07.896 { 00:11:07.896 "dma_device_id": "system", 00:11:07.896 "dma_device_type": 1 00:11:07.896 }, 00:11:07.896 { 00:11:07.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.896 "dma_device_type": 2 00:11:07.896 }, 00:11:07.896 { 00:11:07.896 "dma_device_id": "system", 00:11:07.896 "dma_device_type": 1 00:11:07.896 }, 00:11:07.896 { 00:11:07.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.896 "dma_device_type": 2 00:11:07.896 }, 00:11:07.896 { 00:11:07.896 "dma_device_id": "system", 00:11:07.896 "dma_device_type": 1 00:11:07.896 }, 00:11:07.896 { 00:11:07.896 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:07.897 "dma_device_type": 2 00:11:07.897 } 00:11:07.897 ], 00:11:07.897 "driver_specific": { 00:11:07.897 "raid": { 00:11:07.897 "uuid": "ab1781ac-b45a-4de8-9cf8-395e0a611070", 00:11:07.897 "strip_size_kb": 64, 00:11:07.897 "state": "online", 00:11:07.897 "raid_level": "raid0", 00:11:07.897 "superblock": true, 00:11:07.897 "num_base_bdevs": 4, 00:11:07.897 "num_base_bdevs_discovered": 4, 00:11:07.897 "num_base_bdevs_operational": 4, 00:11:07.897 "base_bdevs_list": [ 00:11:07.897 { 00:11:07.897 "name": "pt1", 00:11:07.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.897 "is_configured": true, 00:11:07.897 "data_offset": 2048, 00:11:07.897 "data_size": 63488 00:11:07.897 }, 00:11:07.897 { 00:11:07.897 "name": "pt2", 00:11:07.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.897 "is_configured": true, 00:11:07.897 "data_offset": 2048, 00:11:07.897 "data_size": 63488 00:11:07.897 }, 00:11:07.897 { 00:11:07.897 "name": "pt3", 00:11:07.897 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.897 "is_configured": true, 00:11:07.897 "data_offset": 2048, 00:11:07.897 "data_size": 63488 00:11:07.897 }, 00:11:07.897 { 00:11:07.897 "name": "pt4", 00:11:07.897 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.897 "is_configured": true, 00:11:07.897 "data_offset": 2048, 00:11:07.897 "data_size": 63488 00:11:07.897 } 00:11:07.897 ] 00:11:07.897 } 00:11:07.897 } 00:11:07.897 }' 00:11:07.897 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:08.156 pt2 00:11:08.156 pt3 00:11:08.156 pt4' 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.156 15:38:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:08.156 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.157 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.157 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.157 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.157 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.157 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.157 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.157 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:08.157 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.157 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.157 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.157 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.416 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.416 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.416 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:08.416 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:08.416 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:08.416 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.416 [2024-11-25 15:38:06.849416] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.416 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.416 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ab1781ac-b45a-4de8-9cf8-395e0a611070 00:11:08.416 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ab1781ac-b45a-4de8-9cf8-395e0a611070 ']' 00:11:08.416 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:08.416 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.416 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.417 [2024-11-25 15:38:06.897017] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:08.417 [2024-11-25 15:38:06.897079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.417 [2024-11-25 15:38:06.897182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.417 [2024-11-25 15:38:06.897269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:08.417 [2024-11-25 15:38:06.897316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.417 15:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:08.417 15:38:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.417 [2024-11-25 15:38:07.064726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:08.417 [2024-11-25 15:38:07.066562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:08.417 [2024-11-25 15:38:07.066620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:08.417 [2024-11-25 15:38:07.066653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:08.417 [2024-11-25 15:38:07.066703] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:08.417 [2024-11-25 15:38:07.066747] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:08.417 [2024-11-25 15:38:07.066766] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:08.417 [2024-11-25 15:38:07.066783] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:08.417 [2024-11-25 15:38:07.066796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:08.417 [2024-11-25 15:38:07.066809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:08.417 request: 00:11:08.417 { 00:11:08.417 "name": "raid_bdev1", 00:11:08.417 "raid_level": "raid0", 00:11:08.417 "base_bdevs": [ 00:11:08.417 "malloc1", 00:11:08.417 "malloc2", 00:11:08.417 "malloc3", 00:11:08.417 "malloc4" 00:11:08.417 ], 00:11:08.417 "strip_size_kb": 64, 00:11:08.417 "superblock": false, 00:11:08.417 "method": "bdev_raid_create", 00:11:08.417 "req_id": 1 00:11:08.417 } 00:11:08.417 Got JSON-RPC error response 00:11:08.417 response: 00:11:08.417 { 00:11:08.417 "code": -17, 00:11:08.417 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:08.417 } 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.417 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.677 [2024-11-25 15:38:07.132625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:08.677 [2024-11-25 15:38:07.132762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.677 [2024-11-25 15:38:07.132799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:08.677 [2024-11-25 15:38:07.132831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.677 [2024-11-25 15:38:07.135022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.677 [2024-11-25 15:38:07.135099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:08.677 [2024-11-25 15:38:07.135209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:08.677 [2024-11-25 15:38:07.135301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:08.677 pt1 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.677 "name": "raid_bdev1", 00:11:08.677 "uuid": "ab1781ac-b45a-4de8-9cf8-395e0a611070", 00:11:08.677 "strip_size_kb": 64, 00:11:08.677 "state": "configuring", 00:11:08.677 "raid_level": "raid0", 00:11:08.677 "superblock": true, 00:11:08.677 "num_base_bdevs": 4, 00:11:08.677 "num_base_bdevs_discovered": 1, 00:11:08.677 "num_base_bdevs_operational": 4, 00:11:08.677 "base_bdevs_list": [ 00:11:08.677 { 00:11:08.677 "name": "pt1", 00:11:08.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.677 "is_configured": true, 00:11:08.677 "data_offset": 2048, 00:11:08.677 "data_size": 63488 00:11:08.677 }, 00:11:08.677 { 00:11:08.677 "name": null, 00:11:08.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.677 "is_configured": false, 00:11:08.677 "data_offset": 2048, 00:11:08.677 "data_size": 63488 00:11:08.677 }, 00:11:08.677 { 00:11:08.677 "name": null, 00:11:08.677 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.677 "is_configured": false, 00:11:08.677 "data_offset": 2048, 00:11:08.677 "data_size": 63488 00:11:08.677 }, 00:11:08.677 { 00:11:08.677 "name": null, 00:11:08.677 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:08.677 "is_configured": false, 00:11:08.677 "data_offset": 2048, 00:11:08.677 "data_size": 63488 00:11:08.677 } 00:11:08.677 ] 00:11:08.677 }' 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.677 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.936 [2024-11-25 15:38:07.595849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:08.936 [2024-11-25 15:38:07.595985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.936 [2024-11-25 15:38:07.596054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:08.936 [2024-11-25 15:38:07.596090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.936 [2024-11-25 15:38:07.596553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.936 [2024-11-25 15:38:07.596616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:08.936 [2024-11-25 15:38:07.596729] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:08.936 [2024-11-25 15:38:07.596786] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:08.936 pt2 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.936 [2024-11-25 15:38:07.607820] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.936 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.196 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.196 15:38:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.196 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.196 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.196 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.196 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.196 "name": "raid_bdev1", 00:11:09.196 "uuid": "ab1781ac-b45a-4de8-9cf8-395e0a611070", 00:11:09.196 "strip_size_kb": 64, 00:11:09.196 "state": "configuring", 00:11:09.196 "raid_level": "raid0", 00:11:09.196 "superblock": true, 00:11:09.196 "num_base_bdevs": 4, 00:11:09.196 "num_base_bdevs_discovered": 1, 00:11:09.196 "num_base_bdevs_operational": 4, 00:11:09.196 "base_bdevs_list": [ 00:11:09.196 { 00:11:09.196 "name": "pt1", 00:11:09.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.196 "is_configured": true, 00:11:09.196 "data_offset": 2048, 00:11:09.196 "data_size": 63488 00:11:09.196 }, 00:11:09.196 { 00:11:09.196 "name": null, 00:11:09.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.196 "is_configured": false, 00:11:09.196 "data_offset": 0, 00:11:09.196 "data_size": 63488 00:11:09.196 }, 00:11:09.196 { 00:11:09.196 "name": null, 00:11:09.196 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.196 "is_configured": false, 00:11:09.196 "data_offset": 2048, 00:11:09.196 "data_size": 63488 00:11:09.196 }, 00:11:09.196 { 00:11:09.196 "name": null, 00:11:09.196 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:09.196 "is_configured": false, 00:11:09.196 "data_offset": 2048, 00:11:09.196 "data_size": 63488 00:11:09.196 } 00:11:09.196 ] 00:11:09.196 }' 00:11:09.196 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.196 15:38:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:09.456 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:09.456 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:09.456 15:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:09.456 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.456 15:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.456 [2024-11-25 15:38:07.999164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:09.456 [2024-11-25 15:38:07.999230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.456 [2024-11-25 15:38:07.999250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:09.456 [2024-11-25 15:38:07.999260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.456 [2024-11-25 15:38:07.999706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.456 [2024-11-25 15:38:07.999736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:09.456 [2024-11-25 15:38:07.999827] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:09.456 [2024-11-25 15:38:07.999854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:09.456 pt2 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.456 [2024-11-25 15:38:08.011117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:09.456 [2024-11-25 15:38:08.011166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.456 [2024-11-25 15:38:08.011190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:09.456 [2024-11-25 15:38:08.011200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.456 [2024-11-25 15:38:08.011575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.456 [2024-11-25 15:38:08.011604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:09.456 [2024-11-25 15:38:08.011670] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:09.456 [2024-11-25 15:38:08.011689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:09.456 pt3 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.456 [2024-11-25 15:38:08.023081] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:09.456 [2024-11-25 15:38:08.023135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.456 [2024-11-25 15:38:08.023155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:09.456 [2024-11-25 15:38:08.023163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.456 [2024-11-25 15:38:08.023553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.456 [2024-11-25 15:38:08.023580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:09.456 [2024-11-25 15:38:08.023654] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:09.456 [2024-11-25 15:38:08.023689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:09.456 [2024-11-25 15:38:08.023829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:09.456 [2024-11-25 15:38:08.023842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:09.456 [2024-11-25 15:38:08.024090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:09.456 [2024-11-25 15:38:08.024239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:09.456 [2024-11-25 15:38:08.024255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:09.456 [2024-11-25 15:38:08.024383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.456 pt4 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:09.456 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.457 "name": "raid_bdev1", 00:11:09.457 "uuid": "ab1781ac-b45a-4de8-9cf8-395e0a611070", 00:11:09.457 "strip_size_kb": 64, 00:11:09.457 "state": "online", 00:11:09.457 "raid_level": "raid0", 00:11:09.457 
"superblock": true, 00:11:09.457 "num_base_bdevs": 4, 00:11:09.457 "num_base_bdevs_discovered": 4, 00:11:09.457 "num_base_bdevs_operational": 4, 00:11:09.457 "base_bdevs_list": [ 00:11:09.457 { 00:11:09.457 "name": "pt1", 00:11:09.457 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.457 "is_configured": true, 00:11:09.457 "data_offset": 2048, 00:11:09.457 "data_size": 63488 00:11:09.457 }, 00:11:09.457 { 00:11:09.457 "name": "pt2", 00:11:09.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.457 "is_configured": true, 00:11:09.457 "data_offset": 2048, 00:11:09.457 "data_size": 63488 00:11:09.457 }, 00:11:09.457 { 00:11:09.457 "name": "pt3", 00:11:09.457 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.457 "is_configured": true, 00:11:09.457 "data_offset": 2048, 00:11:09.457 "data_size": 63488 00:11:09.457 }, 00:11:09.457 { 00:11:09.457 "name": "pt4", 00:11:09.457 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:09.457 "is_configured": true, 00:11:09.457 "data_offset": 2048, 00:11:09.457 "data_size": 63488 00:11:09.457 } 00:11:09.457 ] 00:11:09.457 }' 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.457 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:10.025 15:38:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.025 [2024-11-25 15:38:08.454735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:10.025 "name": "raid_bdev1", 00:11:10.025 "aliases": [ 00:11:10.025 "ab1781ac-b45a-4de8-9cf8-395e0a611070" 00:11:10.025 ], 00:11:10.025 "product_name": "Raid Volume", 00:11:10.025 "block_size": 512, 00:11:10.025 "num_blocks": 253952, 00:11:10.025 "uuid": "ab1781ac-b45a-4de8-9cf8-395e0a611070", 00:11:10.025 "assigned_rate_limits": { 00:11:10.025 "rw_ios_per_sec": 0, 00:11:10.025 "rw_mbytes_per_sec": 0, 00:11:10.025 "r_mbytes_per_sec": 0, 00:11:10.025 "w_mbytes_per_sec": 0 00:11:10.025 }, 00:11:10.025 "claimed": false, 00:11:10.025 "zoned": false, 00:11:10.025 "supported_io_types": { 00:11:10.025 "read": true, 00:11:10.025 "write": true, 00:11:10.025 "unmap": true, 00:11:10.025 "flush": true, 00:11:10.025 "reset": true, 00:11:10.025 "nvme_admin": false, 00:11:10.025 "nvme_io": false, 00:11:10.025 "nvme_io_md": false, 00:11:10.025 "write_zeroes": true, 00:11:10.025 "zcopy": false, 00:11:10.025 "get_zone_info": false, 00:11:10.025 "zone_management": false, 00:11:10.025 "zone_append": false, 00:11:10.025 "compare": false, 00:11:10.025 "compare_and_write": false, 00:11:10.025 "abort": false, 00:11:10.025 "seek_hole": false, 00:11:10.025 "seek_data": false, 00:11:10.025 "copy": false, 00:11:10.025 "nvme_iov_md": false 00:11:10.025 }, 00:11:10.025 
"memory_domains": [ 00:11:10.025 { 00:11:10.025 "dma_device_id": "system", 00:11:10.025 "dma_device_type": 1 00:11:10.025 }, 00:11:10.025 { 00:11:10.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.025 "dma_device_type": 2 00:11:10.025 }, 00:11:10.025 { 00:11:10.025 "dma_device_id": "system", 00:11:10.025 "dma_device_type": 1 00:11:10.025 }, 00:11:10.025 { 00:11:10.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.025 "dma_device_type": 2 00:11:10.025 }, 00:11:10.025 { 00:11:10.025 "dma_device_id": "system", 00:11:10.025 "dma_device_type": 1 00:11:10.025 }, 00:11:10.025 { 00:11:10.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.025 "dma_device_type": 2 00:11:10.025 }, 00:11:10.025 { 00:11:10.025 "dma_device_id": "system", 00:11:10.025 "dma_device_type": 1 00:11:10.025 }, 00:11:10.025 { 00:11:10.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.025 "dma_device_type": 2 00:11:10.025 } 00:11:10.025 ], 00:11:10.025 "driver_specific": { 00:11:10.025 "raid": { 00:11:10.025 "uuid": "ab1781ac-b45a-4de8-9cf8-395e0a611070", 00:11:10.025 "strip_size_kb": 64, 00:11:10.025 "state": "online", 00:11:10.025 "raid_level": "raid0", 00:11:10.025 "superblock": true, 00:11:10.025 "num_base_bdevs": 4, 00:11:10.025 "num_base_bdevs_discovered": 4, 00:11:10.025 "num_base_bdevs_operational": 4, 00:11:10.025 "base_bdevs_list": [ 00:11:10.025 { 00:11:10.025 "name": "pt1", 00:11:10.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.025 "is_configured": true, 00:11:10.025 "data_offset": 2048, 00:11:10.025 "data_size": 63488 00:11:10.025 }, 00:11:10.025 { 00:11:10.025 "name": "pt2", 00:11:10.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.025 "is_configured": true, 00:11:10.025 "data_offset": 2048, 00:11:10.025 "data_size": 63488 00:11:10.025 }, 00:11:10.025 { 00:11:10.025 "name": "pt3", 00:11:10.025 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.025 "is_configured": true, 00:11:10.025 "data_offset": 2048, 00:11:10.025 "data_size": 63488 
00:11:10.025 }, 00:11:10.025 { 00:11:10.025 "name": "pt4", 00:11:10.025 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:10.025 "is_configured": true, 00:11:10.025 "data_offset": 2048, 00:11:10.025 "data_size": 63488 00:11:10.025 } 00:11:10.025 ] 00:11:10.025 } 00:11:10.025 } 00:11:10.025 }' 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:10.025 pt2 00:11:10.025 pt3 00:11:10.025 pt4' 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:10.025 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.026 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.285 [2024-11-25 15:38:08.794061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ab1781ac-b45a-4de8-9cf8-395e0a611070 '!=' ab1781ac-b45a-4de8-9cf8-395e0a611070 ']' 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70475 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70475 ']' 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70475 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70475 00:11:10.285 killing process with pid 70475 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70475' 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70475 00:11:10.285 [2024-11-25 15:38:08.880022] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.285 [2024-11-25 15:38:08.880114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.285 [2024-11-25 15:38:08.880187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.285 [2024-11-25 15:38:08.880197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:10.285 15:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70475 00:11:10.851 [2024-11-25 15:38:09.271892] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.795 15:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:11.795 00:11:11.795 real 0m5.422s 00:11:11.795 user 0m7.816s 00:11:11.795 sys 0m0.910s 00:11:11.795 15:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.795 15:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.795 ************************************ 00:11:11.795 END TEST raid_superblock_test 
00:11:11.795 ************************************ 00:11:11.795 15:38:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:11.795 15:38:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:11.795 15:38:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.795 15:38:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.795 ************************************ 00:11:11.795 START TEST raid_read_error_test 00:11:11.795 ************************************ 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MFNRQUoBGU 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70734 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70734 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70734 ']' 00:11:11.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.795 15:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.054 [2024-11-25 15:38:10.505304] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:11:12.054 [2024-11-25 15:38:10.505507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70734 ] 00:11:12.054 [2024-11-25 15:38:10.679769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.313 [2024-11-25 15:38:10.790427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.313 [2024-11-25 15:38:10.982903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.313 [2024-11-25 15:38:10.983033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.883 BaseBdev1_malloc 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.883 true 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.883 [2024-11-25 15:38:11.390242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:12.883 [2024-11-25 15:38:11.390293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.883 [2024-11-25 15:38:11.390328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:12.883 [2024-11-25 15:38:11.390338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.883 [2024-11-25 15:38:11.392347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.883 [2024-11-25 15:38:11.392385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:12.883 BaseBdev1 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.883 BaseBdev2_malloc 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.883 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.883 true 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.884 [2024-11-25 15:38:11.454483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:12.884 [2024-11-25 15:38:11.454532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.884 [2024-11-25 15:38:11.454547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:12.884 [2024-11-25 15:38:11.454557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.884 [2024-11-25 15:38:11.456543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.884 [2024-11-25 15:38:11.456582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:12.884 BaseBdev2 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.884 BaseBdev3_malloc 00:11:12.884 15:38:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.884 true 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.884 [2024-11-25 15:38:11.532805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:12.884 [2024-11-25 15:38:11.532905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.884 [2024-11-25 15:38:11.532956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:12.884 [2024-11-25 15:38:11.532986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.884 [2024-11-25 15:38:11.535070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.884 [2024-11-25 15:38:11.535144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:12.884 BaseBdev3 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.884 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.144 BaseBdev4_malloc 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.144 true 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.144 [2024-11-25 15:38:11.597483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:13.144 [2024-11-25 15:38:11.597534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.144 [2024-11-25 15:38:11.597567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:13.144 [2024-11-25 15:38:11.597577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.144 [2024-11-25 15:38:11.599634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.144 [2024-11-25 15:38:11.599717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:13.144 BaseBdev4 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.144 [2024-11-25 15:38:11.609519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.144 [2024-11-25 15:38:11.611288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.144 [2024-11-25 15:38:11.611362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:13.144 [2024-11-25 15:38:11.611424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:13.144 [2024-11-25 15:38:11.611637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:13.144 [2024-11-25 15:38:11.611652] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:13.144 [2024-11-25 15:38:11.611890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:13.144 [2024-11-25 15:38:11.612043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:13.144 [2024-11-25 15:38:11.612055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:13.144 [2024-11-25 15:38:11.612194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:13.144 15:38:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.144 "name": "raid_bdev1", 00:11:13.144 "uuid": "e6e9e72b-1ba6-41dc-b999-4a57e890a1a3", 00:11:13.144 "strip_size_kb": 64, 00:11:13.144 "state": "online", 00:11:13.144 "raid_level": "raid0", 00:11:13.144 "superblock": true, 00:11:13.144 "num_base_bdevs": 4, 00:11:13.144 "num_base_bdevs_discovered": 4, 00:11:13.144 "num_base_bdevs_operational": 4, 00:11:13.144 "base_bdevs_list": [ 00:11:13.144 
{ 00:11:13.144 "name": "BaseBdev1", 00:11:13.144 "uuid": "f26c7aad-43c3-535b-b14f-2d3e931c35b5", 00:11:13.144 "is_configured": true, 00:11:13.144 "data_offset": 2048, 00:11:13.144 "data_size": 63488 00:11:13.144 }, 00:11:13.144 { 00:11:13.144 "name": "BaseBdev2", 00:11:13.144 "uuid": "976d86b9-0cff-5d75-bb63-1d7e347414c9", 00:11:13.144 "is_configured": true, 00:11:13.144 "data_offset": 2048, 00:11:13.144 "data_size": 63488 00:11:13.144 }, 00:11:13.144 { 00:11:13.144 "name": "BaseBdev3", 00:11:13.144 "uuid": "4490211f-b74a-5ebf-b919-05ba0350fb95", 00:11:13.144 "is_configured": true, 00:11:13.144 "data_offset": 2048, 00:11:13.144 "data_size": 63488 00:11:13.144 }, 00:11:13.144 { 00:11:13.144 "name": "BaseBdev4", 00:11:13.144 "uuid": "077e70d6-963a-5a65-8841-1bc0b73cb14a", 00:11:13.144 "is_configured": true, 00:11:13.144 "data_offset": 2048, 00:11:13.144 "data_size": 63488 00:11:13.144 } 00:11:13.144 ] 00:11:13.144 }' 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.144 15:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.405 15:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:13.405 15:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:13.665 [2024-11-25 15:38:12.138000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.605 15:38:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.605 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.606 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.606 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.606 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.606 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.606 15:38:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.606 "name": "raid_bdev1", 00:11:14.606 "uuid": "e6e9e72b-1ba6-41dc-b999-4a57e890a1a3", 00:11:14.606 "strip_size_kb": 64, 00:11:14.606 "state": "online", 00:11:14.606 "raid_level": "raid0", 00:11:14.606 "superblock": true, 00:11:14.606 "num_base_bdevs": 4, 00:11:14.606 "num_base_bdevs_discovered": 4, 00:11:14.606 "num_base_bdevs_operational": 4, 00:11:14.606 "base_bdevs_list": [ 00:11:14.606 { 00:11:14.606 "name": "BaseBdev1", 00:11:14.606 "uuid": "f26c7aad-43c3-535b-b14f-2d3e931c35b5", 00:11:14.606 "is_configured": true, 00:11:14.606 "data_offset": 2048, 00:11:14.606 "data_size": 63488 00:11:14.606 }, 00:11:14.606 { 00:11:14.606 "name": "BaseBdev2", 00:11:14.606 "uuid": "976d86b9-0cff-5d75-bb63-1d7e347414c9", 00:11:14.606 "is_configured": true, 00:11:14.606 "data_offset": 2048, 00:11:14.606 "data_size": 63488 00:11:14.606 }, 00:11:14.606 { 00:11:14.606 "name": "BaseBdev3", 00:11:14.606 "uuid": "4490211f-b74a-5ebf-b919-05ba0350fb95", 00:11:14.606 "is_configured": true, 00:11:14.606 "data_offset": 2048, 00:11:14.606 "data_size": 63488 00:11:14.606 }, 00:11:14.606 { 00:11:14.606 "name": "BaseBdev4", 00:11:14.606 "uuid": "077e70d6-963a-5a65-8841-1bc0b73cb14a", 00:11:14.606 "is_configured": true, 00:11:14.606 "data_offset": 2048, 00:11:14.606 "data_size": 63488 00:11:14.606 } 00:11:14.606 ] 00:11:14.606 }' 00:11:14.606 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.606 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.865 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:14.865 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.865 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.865 [2024-11-25 15:38:13.461896] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.865 [2024-11-25 15:38:13.461932] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.865 { 00:11:14.865 "results": [ 00:11:14.865 { 00:11:14.865 "job": "raid_bdev1", 00:11:14.865 "core_mask": "0x1", 00:11:14.865 "workload": "randrw", 00:11:14.865 "percentage": 50, 00:11:14.865 "status": "finished", 00:11:14.865 "queue_depth": 1, 00:11:14.865 "io_size": 131072, 00:11:14.865 "runtime": 1.324587, 00:11:14.865 "iops": 16157.48908905191, 00:11:14.865 "mibps": 2019.6861361314886, 00:11:14.865 "io_failed": 1, 00:11:14.865 "io_timeout": 0, 00:11:14.865 "avg_latency_us": 86.12046852183927, 00:11:14.865 "min_latency_us": 25.4882096069869, 00:11:14.865 "max_latency_us": 1438.071615720524 00:11:14.865 } 00:11:14.865 ], 00:11:14.865 "core_count": 1 00:11:14.865 } 00:11:14.865 [2024-11-25 15:38:13.464590] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.865 [2024-11-25 15:38:13.464650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.865 [2024-11-25 15:38:13.464691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.866 [2024-11-25 15:38:13.464702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:14.866 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.866 15:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70734 00:11:14.866 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70734 ']' 00:11:14.866 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70734 00:11:14.866 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:14.866 15:38:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.866 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70734 00:11:14.866 killing process with pid 70734 00:11:14.866 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.866 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.866 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70734' 00:11:14.866 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70734 00:11:14.866 [2024-11-25 15:38:13.490020] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.866 15:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70734 00:11:15.434 [2024-11-25 15:38:13.804759] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.374 15:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:16.374 15:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:16.374 15:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MFNRQUoBGU 00:11:16.374 15:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:16.374 15:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:16.374 15:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.374 15:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:16.374 15:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:16.374 00:11:16.374 real 0m4.547s 00:11:16.374 user 0m5.332s 00:11:16.374 sys 0m0.526s 00:11:16.374 15:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:16.374 15:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.374 ************************************ 00:11:16.374 END TEST raid_read_error_test 00:11:16.374 ************************************ 00:11:16.374 15:38:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:16.374 15:38:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:16.374 15:38:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.374 15:38:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.374 ************************************ 00:11:16.374 START TEST raid_write_error_test 00:11:16.374 ************************************ 00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.r3x4guaT7k
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70874
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70874
00:11:16.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70874 ']'
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:16.374 15:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:16.635 [2024-11-25 15:38:15.127078] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization...
00:11:16.635 [2024-11-25 15:38:15.127197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70874 ]
00:11:16.635 [2024-11-25 15:38:15.299133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:16.894 [2024-11-25 15:38:15.413059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:17.154 [2024-11-25 15:38:15.612428] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:17.154 [2024-11-25 15:38:15.612466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:17.414 15:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:17.414 15:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:11:17.414 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:17.414 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:11:17.414 15:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.414 15:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.414 BaseBdev1_malloc
00:11:17.414 15:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.414 15:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:11:17.414 15:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.414 15:38:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.414 true
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.414 [2024-11-25 15:38:16.011116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:11:17.414 [2024-11-25 15:38:16.011239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:17.414 [2024-11-25 15:38:16.011264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:11:17.414 [2024-11-25 15:38:16.011275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:17.414 [2024-11-25 15:38:16.013363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:17.414 [2024-11-25 15:38:16.013399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:11:17.414 BaseBdev1
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.414 BaseBdev2_malloc
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.414 true
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.414 [2024-11-25 15:38:16.078427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:11:17.414 [2024-11-25 15:38:16.078481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:17.414 [2024-11-25 15:38:16.078499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:11:17.414 [2024-11-25 15:38:16.078509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:17.414 [2024-11-25 15:38:16.080565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:17.414 [2024-11-25 15:38:16.080656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:11:17.414 BaseBdev2
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.414 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.675 BaseBdev3_malloc
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.675 true
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.675 [2024-11-25 15:38:16.155758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:11:17.675 [2024-11-25 15:38:16.155869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:17.675 [2024-11-25 15:38:16.155902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:11:17.675 [2024-11-25 15:38:16.155931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:17.675 [2024-11-25 15:38:16.157930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:17.675 [2024-11-25 15:38:16.158004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:11:17.675 BaseBdev3
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.675 BaseBdev4_malloc
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.675 true
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.675 [2024-11-25 15:38:16.222452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:11:17.675 [2024-11-25 15:38:16.222544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:17.675 [2024-11-25 15:38:16.222566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:11:17.675 [2024-11-25 15:38:16.222577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:17.675 [2024-11-25 15:38:16.224663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:17.675 [2024-11-25 15:38:16.224717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:11:17.675 BaseBdev4
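At this point the log has finished stacking malloc, error-injection, and passthru bdevs for all four bases; the script's next step (visible below at bdev/bdev_raid.sh@113) captures `bdev_raid_get_bdevs` output and filters it with `jq -r '.[] | select(.name == "raid_bdev1")'`. That filtering step can be sketched stand-alone against a trimmed copy of the JSON recorded in this log; no running SPDK target is involved, and the trimmed JSON is an assumption cut down from the full output shown later:

```shell
# Trimmed copy of the bdev_raid_get_bdevs output recorded later in this log
# (only a few of the fields; the real output also lists base_bdevs_list etc.).
raid_json='[{"name":"raid_bdev1","state":"online","raid_level":"raid0","strip_size_kb":64,"num_base_bdevs_discovered":4}]'

# Same selection verify_raid_bdev_state performs at bdev/bdev_raid.sh@113.
info=$(printf '%s' "$raid_json" | jq -r '.[] | select(.name == "raid_bdev1")')

# Individual fields are then compared against the expected state/level/strip size.
state=$(printf '%s' "$info" | jq -r '.state')
level=$(printf '%s' "$info" | jq -r '.raid_level')
echo "$state $level"
```

The `select()` keeps only the raid bdev under test, so the comparison logic never has to care how many other bdevs the target reports.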
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.675 [2024-11-25 15:38:16.234484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:17.675 [2024-11-25 15:38:16.236260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:17.675 [2024-11-25 15:38:16.236335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:17.675 [2024-11-25 15:38:16.236402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:17.675 [2024-11-25 15:38:16.236614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:11:17.675 [2024-11-25 15:38:16.236632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:17.675 [2024-11-25 15:38:16.236874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0
00:11:17.675 [2024-11-25 15:38:16.237029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:11:17.675 [2024-11-25 15:38:16.237041] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:11:17.675 [2024-11-25 15:38:16.237188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.675 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:17.675 "name": "raid_bdev1",
00:11:17.675 "uuid": "51f88d07-f766-425b-9714-abe0c47ac602",
00:11:17.675 "strip_size_kb": 64,
00:11:17.675 "state": "online",
00:11:17.675 "raid_level": "raid0",
00:11:17.675 "superblock": true,
00:11:17.675 "num_base_bdevs": 4,
00:11:17.675 "num_base_bdevs_discovered": 4,
00:11:17.675 "num_base_bdevs_operational": 4,
00:11:17.675 "base_bdevs_list": [
00:11:17.675 {
00:11:17.675 "name": "BaseBdev1",
00:11:17.675 "uuid": "3e3f7f3c-9822-5c5e-89c0-e183b65342fd",
00:11:17.675 "is_configured": true,
00:11:17.675 "data_offset": 2048,
00:11:17.675 "data_size": 63488
00:11:17.675 },
00:11:17.675 {
00:11:17.675 "name": "BaseBdev2",
00:11:17.675 "uuid": "61a67984-9183-561c-99e6-b543a308e675",
00:11:17.675 "is_configured": true,
00:11:17.675 "data_offset": 2048,
00:11:17.675 "data_size": 63488
00:11:17.675 },
00:11:17.675 {
00:11:17.675 "name": "BaseBdev3",
00:11:17.675 "uuid": "d3875cc1-7fdc-56bb-a7a0-400b81ccfd60",
00:11:17.676 "is_configured": true,
00:11:17.676 "data_offset": 2048,
00:11:17.676 "data_size": 63488
00:11:17.676 },
00:11:17.676 {
00:11:17.676 "name": "BaseBdev4",
00:11:17.676 "uuid": "1139e542-eae2-57ad-9fc4-fad79b93c80f",
00:11:17.676 "is_configured": true,
00:11:17.676 "data_offset": 2048,
00:11:17.676 "data_size": 63488
00:11:17.676 }
00:11:17.676 ]
00:11:17.676 }'
00:11:17.676 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:17.676 15:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.245 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:11:18.245 15:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:11:18.245 [2024-11-25 15:38:16.743052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:19.185 "name": "raid_bdev1",
00:11:19.185 "uuid": "51f88d07-f766-425b-9714-abe0c47ac602",
00:11:19.185 "strip_size_kb": 64,
00:11:19.185 "state": "online",
00:11:19.185 "raid_level": "raid0",
00:11:19.185 "superblock": true,
00:11:19.185 "num_base_bdevs": 4,
00:11:19.185 "num_base_bdevs_discovered": 4,
00:11:19.185 "num_base_bdevs_operational": 4,
00:11:19.185 "base_bdevs_list": [
00:11:19.185 {
00:11:19.185 "name": "BaseBdev1",
00:11:19.185 "uuid": "3e3f7f3c-9822-5c5e-89c0-e183b65342fd",
00:11:19.185 "is_configured": true,
00:11:19.185 "data_offset": 2048,
00:11:19.185 "data_size": 63488
00:11:19.185 },
00:11:19.185 {
00:11:19.185 "name": "BaseBdev2",
00:11:19.185 "uuid": "61a67984-9183-561c-99e6-b543a308e675",
00:11:19.185 "is_configured": true,
00:11:19.185 "data_offset": 2048,
00:11:19.185 "data_size": 63488
00:11:19.185 },
00:11:19.185 {
00:11:19.185 "name": "BaseBdev3",
00:11:19.185 "uuid": "d3875cc1-7fdc-56bb-a7a0-400b81ccfd60",
00:11:19.185 "is_configured": true,
00:11:19.185 "data_offset": 2048,
00:11:19.185 "data_size": 63488
00:11:19.185 },
00:11:19.185 {
00:11:19.185 "name": "BaseBdev4",
00:11:19.185 "uuid": "1139e542-eae2-57ad-9fc4-fad79b93c80f",
00:11:19.185 "is_configured": true,
00:11:19.185 "data_offset": 2048,
00:11:19.185 "data_size": 63488
00:11:19.185 }
00:11:19.185 ]
00:11:19.185 }'
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:19.185 15:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.756 15:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:19.756 15:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:19.756 15:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.756 [2024-11-25 15:38:18.137561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:19.756 [2024-11-25 15:38:18.137597] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:19.756 [2024-11-25 15:38:18.140195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:19.756 [2024-11-25 15:38:18.140255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:19.756 [2024-11-25 15:38:18.140299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:19.756 [2024-11-25 15:38:18.140311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:11:19.756 {
00:11:19.756 "results": [
00:11:19.756 {
00:11:19.756 "job": "raid_bdev1",
00:11:19.756 "core_mask": "0x1",
00:11:19.756 "workload": "randrw",
00:11:19.756 "percentage": 50,
00:11:19.756 "status": "finished",
00:11:19.756 "queue_depth": 1,
00:11:19.756 "io_size": 131072,
00:11:19.756 "runtime": 1.395222,
00:11:19.756 "iops": 16008.921877665347,
00:11:19.756 "mibps": 2001.1152347081684,
00:11:19.756 "io_failed": 1,
00:11:19.756 "io_timeout": 0,
00:11:19.756 "avg_latency_us": 87.04073219028956,
00:11:19.756 "min_latency_us": 24.929257641921396,
00:11:19.756 "max_latency_us": 1495.3082969432314
00:11:19.756 }
00:11:19.756 ],
00:11:19.756 "core_count": 1
00:11:19.756 }
00:11:19.756 15:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:19.756 15:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70874
00:11:19.756 15:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70874 ']'
00:11:19.756 15:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70874
00:11:19.756 15:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:11:19.756 15:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:19.756 15:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70874
00:11:19.756 15:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:19.756 15:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:19.757 killing process with pid 70874
15:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70874'
15:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70874
00:11:19.757 [2024-11-25 15:38:18.182311] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
15:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70874
00:11:20.016 [2024-11-25 15:38:18.491235] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:20.976 15:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:11:20.976 15:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.r3x4guaT7k
00:11:20.976 15:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:11:20.976 15:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72
00:11:20.976 15:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:11:20.976 15:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:20.976 15:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:20.976 15:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]]
00:11:20.976
00:11:20.976 real 0m4.590s
00:11:20.976 user 0m5.427s
00:11:20.976 sys 0m0.551s
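The fail_per_s extraction at bdev/bdev_raid.sh@845 above (`grep -v Job | grep raid_bdev1 | awk '{print $6}'`) can be reproduced without running bdevperf. The summary line below is a synthetic stand-in: the real /raidtest/tmp.r3x4guaT7k layout comes from bdevperf's -f output and its exact columns are not shown in this log; only the pipeline itself is taken from the script:

```shell
# Synthetic stand-in for the bdevperf per-job summary file. Assumptions:
# header lines contain the word "Job", the data row starts with the job name,
# and field 6 holds the failures-per-second value the script reads.
summary=$(mktemp)
printf '%s\n' \
  'Job: raid_bdev1 ended in about 1.40 seconds with error' \
  'raid_bdev1 16008.92 2001.12 1 0 0.72' \
  > "$summary"

# Same pipeline as bdev/bdev_raid.sh@845: drop "Job" header lines, keep the
# raid_bdev1 row, print field 6.
fail_per_s=$(grep -v Job "$summary" | grep raid_bdev1 | awk '{print $6}')
echo "$fail_per_s"
rm -f "$summary"
```

The test then fails only if the extracted rate is exactly 0.00 while errors were injected, which is the `[[ 0.72 != \0\.\0\0 ]]` check seen above.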
15:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:20.976 15:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.976 ************************************
00:11:20.976 END TEST raid_write_error_test
00:11:20.976 ************************************
00:11:21.236 15:38:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:11:21.236 15:38:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false
00:11:21.236 15:38:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:21.236 15:38:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:21.236 15:38:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:21.236 ************************************
00:11:21.236 START TEST raid_state_function_test
00:11:21.236 ************************************
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71018
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:11:21.236 Process raid pid: 71018
15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71018'
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71018
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71018 ']'
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
15:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:21.236 15:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.236 [2024-11-25 15:38:19.779529] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization...
00:11:21.236 [2024-11-25 15:38:19.779729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:21.496 [2024-11-25 15:38:19.950433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:21.496 [2024-11-25 15:38:20.061145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:21.757 [2024-11-25 15:38:20.249199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:21.757 [2024-11-25 15:38:20.249317] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:22.017 15:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:22.017 15:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:11:22.017 15:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:22.017 15:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.017 15:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.018 [2024-11-25 15:38:20.585245] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:22.018 [2024-11-25 15:38:20.585297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:22.018 [2024-11-25 15:38:20.585307] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:22.018 [2024-11-25 15:38:20.585317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:22.018 [2024-11-25 15:38:20.585323] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:22.018 [2024-11-25 15:38:20.585332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:22.018 [2024-11-25 15:38:20.585337] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:22.018 [2024-11-25 15:38:20.585345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:22.018 "name": "Existed_Raid",
00:11:22.018 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:22.018 "strip_size_kb": 64,
00:11:22.018 "state": "configuring",
00:11:22.018 "raid_level": "concat",
00:11:22.018 "superblock": false,
00:11:22.018 "num_base_bdevs": 4,
00:11:22.018 "num_base_bdevs_discovered": 0,
00:11:22.018 "num_base_bdevs_operational": 4,
00:11:22.018 "base_bdevs_list": [
00:11:22.018 {
00:11:22.018 "name": "BaseBdev1",
00:11:22.018 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:22.018 "is_configured": false,
00:11:22.018 "data_offset": 0,
00:11:22.018 "data_size": 0
00:11:22.018 },
00:11:22.018 {
00:11:22.018 "name": "BaseBdev2",
00:11:22.018 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:22.018 "is_configured": false,
00:11:22.018 "data_offset": 0,
00:11:22.018 "data_size": 0
00:11:22.018 },
00:11:22.018 {
00:11:22.018 "name": "BaseBdev3",
00:11:22.018 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:22.018 "is_configured": false,
00:11:22.018 "data_offset": 0,
00:11:22.018 "data_size": 0
00:11:22.018 },
00:11:22.018 {
00:11:22.018 "name": "BaseBdev4",
00:11:22.018 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:22.018 "is_configured": false,
00:11:22.018 "data_offset": 0,
00:11:22.018 "data_size": 0
00:11:22.018 }
00:11:22.018 ]
00:11:22.018 }'
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:22.018 15:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.588 [2024-11-25 15:38:21.016480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:22.588 [2024-11-25 15:38:21.016574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.588 [2024-11-25 15:38:21.024450] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:22.588 [2024-11-25 15:38:21.024534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:22.588 [2024-11-25 15:38:21.024563] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:22.588 [2024-11-25 15:38:21.024586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:22.588 [2024-11-25 15:38:21.024605] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:22.588 [2024-11-25 15:38:21.024626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:22.588 [2024-11-25 15:38:21.024644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:22.588 [2024-11-25 15:38:21.024703] bdev_raid_rpc.c:
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.588 [2024-11-25 15:38:21.066564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.588 BaseBdev1 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.588 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.588 [ 00:11:22.588 { 00:11:22.588 "name": "BaseBdev1", 00:11:22.588 "aliases": [ 00:11:22.588 "be9cf3ac-850a-4cf8-a3ab-e47a9f7d2214" 00:11:22.588 ], 00:11:22.588 "product_name": "Malloc disk", 00:11:22.588 "block_size": 512, 00:11:22.588 "num_blocks": 65536, 00:11:22.588 "uuid": "be9cf3ac-850a-4cf8-a3ab-e47a9f7d2214", 00:11:22.588 "assigned_rate_limits": { 00:11:22.588 "rw_ios_per_sec": 0, 00:11:22.588 "rw_mbytes_per_sec": 0, 00:11:22.588 "r_mbytes_per_sec": 0, 00:11:22.588 "w_mbytes_per_sec": 0 00:11:22.588 }, 00:11:22.588 "claimed": true, 00:11:22.588 "claim_type": "exclusive_write", 00:11:22.588 "zoned": false, 00:11:22.588 "supported_io_types": { 00:11:22.588 "read": true, 00:11:22.588 "write": true, 00:11:22.588 "unmap": true, 00:11:22.588 "flush": true, 00:11:22.588 "reset": true, 00:11:22.588 "nvme_admin": false, 00:11:22.589 "nvme_io": false, 00:11:22.589 "nvme_io_md": false, 00:11:22.589 "write_zeroes": true, 00:11:22.589 "zcopy": true, 00:11:22.589 "get_zone_info": false, 00:11:22.589 "zone_management": false, 00:11:22.589 "zone_append": false, 00:11:22.589 "compare": false, 00:11:22.589 "compare_and_write": false, 00:11:22.589 "abort": true, 00:11:22.589 "seek_hole": false, 00:11:22.589 "seek_data": false, 00:11:22.589 "copy": true, 00:11:22.589 "nvme_iov_md": false 00:11:22.589 }, 00:11:22.589 "memory_domains": [ 00:11:22.589 { 00:11:22.589 "dma_device_id": "system", 00:11:22.589 "dma_device_type": 1 00:11:22.589 }, 00:11:22.589 { 00:11:22.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.589 "dma_device_type": 2 00:11:22.589 } 00:11:22.589 ], 00:11:22.589 "driver_specific": {} 00:11:22.589 } 00:11:22.589 ] 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.589 "name": "Existed_Raid", 
00:11:22.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.589 "strip_size_kb": 64, 00:11:22.589 "state": "configuring", 00:11:22.589 "raid_level": "concat", 00:11:22.589 "superblock": false, 00:11:22.589 "num_base_bdevs": 4, 00:11:22.589 "num_base_bdevs_discovered": 1, 00:11:22.589 "num_base_bdevs_operational": 4, 00:11:22.589 "base_bdevs_list": [ 00:11:22.589 { 00:11:22.589 "name": "BaseBdev1", 00:11:22.589 "uuid": "be9cf3ac-850a-4cf8-a3ab-e47a9f7d2214", 00:11:22.589 "is_configured": true, 00:11:22.589 "data_offset": 0, 00:11:22.589 "data_size": 65536 00:11:22.589 }, 00:11:22.589 { 00:11:22.589 "name": "BaseBdev2", 00:11:22.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.589 "is_configured": false, 00:11:22.589 "data_offset": 0, 00:11:22.589 "data_size": 0 00:11:22.589 }, 00:11:22.589 { 00:11:22.589 "name": "BaseBdev3", 00:11:22.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.589 "is_configured": false, 00:11:22.589 "data_offset": 0, 00:11:22.589 "data_size": 0 00:11:22.589 }, 00:11:22.589 { 00:11:22.589 "name": "BaseBdev4", 00:11:22.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.589 "is_configured": false, 00:11:22.589 "data_offset": 0, 00:11:22.589 "data_size": 0 00:11:22.589 } 00:11:22.589 ] 00:11:22.589 }' 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.589 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.849 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.849 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.849 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.849 [2024-11-25 15:38:21.513880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.849 [2024-11-25 15:38:21.513937] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:22.849 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.849 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.849 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.849 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.849 [2024-11-25 15:38:21.525919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.849 [2024-11-25 15:38:21.528088] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.849 [2024-11-25 15:38:21.528189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.849 [2024-11-25 15:38:21.528206] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.849 [2024-11-25 15:38:21.528220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.849 [2024-11-25 15:38:21.528228] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:22.849 [2024-11-25 15:38:21.528238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.110 "name": "Existed_Raid", 00:11:23.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.110 "strip_size_kb": 64, 00:11:23.110 "state": "configuring", 00:11:23.110 "raid_level": "concat", 00:11:23.110 "superblock": false, 00:11:23.110 "num_base_bdevs": 4, 00:11:23.110 
"num_base_bdevs_discovered": 1, 00:11:23.110 "num_base_bdevs_operational": 4, 00:11:23.110 "base_bdevs_list": [ 00:11:23.110 { 00:11:23.110 "name": "BaseBdev1", 00:11:23.110 "uuid": "be9cf3ac-850a-4cf8-a3ab-e47a9f7d2214", 00:11:23.110 "is_configured": true, 00:11:23.110 "data_offset": 0, 00:11:23.110 "data_size": 65536 00:11:23.110 }, 00:11:23.110 { 00:11:23.110 "name": "BaseBdev2", 00:11:23.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.110 "is_configured": false, 00:11:23.110 "data_offset": 0, 00:11:23.110 "data_size": 0 00:11:23.110 }, 00:11:23.110 { 00:11:23.110 "name": "BaseBdev3", 00:11:23.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.110 "is_configured": false, 00:11:23.110 "data_offset": 0, 00:11:23.110 "data_size": 0 00:11:23.110 }, 00:11:23.110 { 00:11:23.110 "name": "BaseBdev4", 00:11:23.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.110 "is_configured": false, 00:11:23.110 "data_offset": 0, 00:11:23.110 "data_size": 0 00:11:23.110 } 00:11:23.110 ] 00:11:23.110 }' 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.110 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.371 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:23.371 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.371 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.371 [2024-11-25 15:38:21.993527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.371 BaseBdev2 00:11:23.371 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.371 15:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:23.371 15:38:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:23.371 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.371 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.371 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.371 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.371 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.371 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.371 15:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.371 [ 00:11:23.371 { 00:11:23.371 "name": "BaseBdev2", 00:11:23.371 "aliases": [ 00:11:23.371 "b97c85bd-f41b-4fc3-b80a-9701475a79cf" 00:11:23.371 ], 00:11:23.371 "product_name": "Malloc disk", 00:11:23.371 "block_size": 512, 00:11:23.371 "num_blocks": 65536, 00:11:23.371 "uuid": "b97c85bd-f41b-4fc3-b80a-9701475a79cf", 00:11:23.371 "assigned_rate_limits": { 00:11:23.371 "rw_ios_per_sec": 0, 00:11:23.371 "rw_mbytes_per_sec": 0, 00:11:23.371 "r_mbytes_per_sec": 0, 00:11:23.371 "w_mbytes_per_sec": 0 00:11:23.371 }, 00:11:23.371 "claimed": true, 00:11:23.371 "claim_type": "exclusive_write", 00:11:23.371 "zoned": false, 00:11:23.371 "supported_io_types": { 
00:11:23.371 "read": true, 00:11:23.371 "write": true, 00:11:23.371 "unmap": true, 00:11:23.371 "flush": true, 00:11:23.371 "reset": true, 00:11:23.371 "nvme_admin": false, 00:11:23.371 "nvme_io": false, 00:11:23.371 "nvme_io_md": false, 00:11:23.371 "write_zeroes": true, 00:11:23.371 "zcopy": true, 00:11:23.371 "get_zone_info": false, 00:11:23.371 "zone_management": false, 00:11:23.371 "zone_append": false, 00:11:23.371 "compare": false, 00:11:23.371 "compare_and_write": false, 00:11:23.371 "abort": true, 00:11:23.371 "seek_hole": false, 00:11:23.371 "seek_data": false, 00:11:23.371 "copy": true, 00:11:23.371 "nvme_iov_md": false 00:11:23.371 }, 00:11:23.371 "memory_domains": [ 00:11:23.371 { 00:11:23.371 "dma_device_id": "system", 00:11:23.371 "dma_device_type": 1 00:11:23.371 }, 00:11:23.371 { 00:11:23.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.371 "dma_device_type": 2 00:11:23.371 } 00:11:23.371 ], 00:11:23.371 "driver_specific": {} 00:11:23.371 } 00:11:23.371 ] 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.371 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.631 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.631 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.631 "name": "Existed_Raid", 00:11:23.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.631 "strip_size_kb": 64, 00:11:23.631 "state": "configuring", 00:11:23.631 "raid_level": "concat", 00:11:23.631 "superblock": false, 00:11:23.631 "num_base_bdevs": 4, 00:11:23.631 "num_base_bdevs_discovered": 2, 00:11:23.632 "num_base_bdevs_operational": 4, 00:11:23.632 "base_bdevs_list": [ 00:11:23.632 { 00:11:23.632 "name": "BaseBdev1", 00:11:23.632 "uuid": "be9cf3ac-850a-4cf8-a3ab-e47a9f7d2214", 00:11:23.632 "is_configured": true, 00:11:23.632 "data_offset": 0, 00:11:23.632 "data_size": 65536 00:11:23.632 }, 00:11:23.632 { 00:11:23.632 "name": "BaseBdev2", 00:11:23.632 "uuid": "b97c85bd-f41b-4fc3-b80a-9701475a79cf", 00:11:23.632 
"is_configured": true, 00:11:23.632 "data_offset": 0, 00:11:23.632 "data_size": 65536 00:11:23.632 }, 00:11:23.632 { 00:11:23.632 "name": "BaseBdev3", 00:11:23.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.632 "is_configured": false, 00:11:23.632 "data_offset": 0, 00:11:23.632 "data_size": 0 00:11:23.632 }, 00:11:23.632 { 00:11:23.632 "name": "BaseBdev4", 00:11:23.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.632 "is_configured": false, 00:11:23.632 "data_offset": 0, 00:11:23.632 "data_size": 0 00:11:23.632 } 00:11:23.632 ] 00:11:23.632 }' 00:11:23.632 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.632 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.892 [2024-11-25 15:38:22.534863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.892 BaseBdev3 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.892 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.892 [ 00:11:23.892 { 00:11:23.892 "name": "BaseBdev3", 00:11:23.892 "aliases": [ 00:11:23.892 "3e31e6e7-2269-4e65-b1ca-367f9aafe582" 00:11:23.892 ], 00:11:23.892 "product_name": "Malloc disk", 00:11:23.892 "block_size": 512, 00:11:23.892 "num_blocks": 65536, 00:11:23.892 "uuid": "3e31e6e7-2269-4e65-b1ca-367f9aafe582", 00:11:23.892 "assigned_rate_limits": { 00:11:23.892 "rw_ios_per_sec": 0, 00:11:23.892 "rw_mbytes_per_sec": 0, 00:11:23.892 "r_mbytes_per_sec": 0, 00:11:23.892 "w_mbytes_per_sec": 0 00:11:23.892 }, 00:11:23.892 "claimed": true, 00:11:23.892 "claim_type": "exclusive_write", 00:11:23.892 "zoned": false, 00:11:23.892 "supported_io_types": { 00:11:23.892 "read": true, 00:11:23.892 "write": true, 00:11:23.892 "unmap": true, 00:11:23.892 "flush": true, 00:11:23.892 "reset": true, 00:11:23.892 "nvme_admin": false, 00:11:23.892 "nvme_io": false, 00:11:23.892 "nvme_io_md": false, 00:11:23.892 "write_zeroes": true, 00:11:23.892 "zcopy": true, 00:11:23.892 "get_zone_info": false, 00:11:23.892 "zone_management": false, 00:11:23.892 "zone_append": false, 00:11:23.892 "compare": false, 00:11:23.892 "compare_and_write": false, 
00:11:23.892 "abort": true, 00:11:23.892 "seek_hole": false, 00:11:23.892 "seek_data": false, 00:11:23.892 "copy": true, 00:11:23.892 "nvme_iov_md": false 00:11:23.892 }, 00:11:23.892 "memory_domains": [ 00:11:23.892 { 00:11:23.892 "dma_device_id": "system", 00:11:23.892 "dma_device_type": 1 00:11:23.892 }, 00:11:23.892 { 00:11:23.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.153 "dma_device_type": 2 00:11:24.153 } 00:11:24.153 ], 00:11:24.153 "driver_specific": {} 00:11:24.153 } 00:11:24.153 ] 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.153 "name": "Existed_Raid", 00:11:24.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.153 "strip_size_kb": 64, 00:11:24.153 "state": "configuring", 00:11:24.153 "raid_level": "concat", 00:11:24.153 "superblock": false, 00:11:24.153 "num_base_bdevs": 4, 00:11:24.153 "num_base_bdevs_discovered": 3, 00:11:24.153 "num_base_bdevs_operational": 4, 00:11:24.153 "base_bdevs_list": [ 00:11:24.153 { 00:11:24.153 "name": "BaseBdev1", 00:11:24.153 "uuid": "be9cf3ac-850a-4cf8-a3ab-e47a9f7d2214", 00:11:24.153 "is_configured": true, 00:11:24.153 "data_offset": 0, 00:11:24.153 "data_size": 65536 00:11:24.153 }, 00:11:24.153 { 00:11:24.153 "name": "BaseBdev2", 00:11:24.153 "uuid": "b97c85bd-f41b-4fc3-b80a-9701475a79cf", 00:11:24.153 "is_configured": true, 00:11:24.153 "data_offset": 0, 00:11:24.153 "data_size": 65536 00:11:24.153 }, 00:11:24.153 { 00:11:24.153 "name": "BaseBdev3", 00:11:24.153 "uuid": "3e31e6e7-2269-4e65-b1ca-367f9aafe582", 00:11:24.153 "is_configured": true, 00:11:24.153 "data_offset": 0, 00:11:24.153 "data_size": 65536 00:11:24.153 }, 00:11:24.153 { 00:11:24.153 "name": "BaseBdev4", 00:11:24.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.153 "is_configured": false, 
00:11:24.153 "data_offset": 0, 00:11:24.153 "data_size": 0 00:11:24.153 } 00:11:24.153 ] 00:11:24.153 }' 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.153 15:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.414 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:24.414 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.414 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.414 [2024-11-25 15:38:23.058975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.414 [2024-11-25 15:38:23.059113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:24.414 [2024-11-25 15:38:23.059140] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:24.414 [2024-11-25 15:38:23.059457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:24.414 [2024-11-25 15:38:23.059671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:24.414 [2024-11-25 15:38:23.059721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:24.414 [2024-11-25 15:38:23.060037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.414 BaseBdev4 00:11:24.414 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.414 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:24.414 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:24.415 15:38:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.415 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.415 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.415 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.415 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.415 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.415 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.415 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.415 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:24.415 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.415 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.415 [ 00:11:24.415 { 00:11:24.415 "name": "BaseBdev4", 00:11:24.415 "aliases": [ 00:11:24.415 "0ca9acbc-4ad3-419d-a30b-b5d4495c1146" 00:11:24.415 ], 00:11:24.415 "product_name": "Malloc disk", 00:11:24.415 "block_size": 512, 00:11:24.415 "num_blocks": 65536, 00:11:24.415 "uuid": "0ca9acbc-4ad3-419d-a30b-b5d4495c1146", 00:11:24.415 "assigned_rate_limits": { 00:11:24.415 "rw_ios_per_sec": 0, 00:11:24.415 "rw_mbytes_per_sec": 0, 00:11:24.415 "r_mbytes_per_sec": 0, 00:11:24.415 "w_mbytes_per_sec": 0 00:11:24.415 }, 00:11:24.415 "claimed": true, 00:11:24.415 "claim_type": "exclusive_write", 00:11:24.415 "zoned": false, 00:11:24.415 "supported_io_types": { 00:11:24.415 "read": true, 00:11:24.415 "write": true, 00:11:24.415 "unmap": true, 00:11:24.415 "flush": true, 00:11:24.415 "reset": true, 00:11:24.415 
"nvme_admin": false, 00:11:24.415 "nvme_io": false, 00:11:24.415 "nvme_io_md": false, 00:11:24.415 "write_zeroes": true, 00:11:24.415 "zcopy": true, 00:11:24.675 "get_zone_info": false, 00:11:24.675 "zone_management": false, 00:11:24.675 "zone_append": false, 00:11:24.675 "compare": false, 00:11:24.675 "compare_and_write": false, 00:11:24.675 "abort": true, 00:11:24.675 "seek_hole": false, 00:11:24.675 "seek_data": false, 00:11:24.675 "copy": true, 00:11:24.675 "nvme_iov_md": false 00:11:24.675 }, 00:11:24.675 "memory_domains": [ 00:11:24.675 { 00:11:24.675 "dma_device_id": "system", 00:11:24.675 "dma_device_type": 1 00:11:24.675 }, 00:11:24.675 { 00:11:24.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.675 "dma_device_type": 2 00:11:24.675 } 00:11:24.675 ], 00:11:24.675 "driver_specific": {} 00:11:24.675 } 00:11:24.675 ] 00:11:24.675 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.675 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.675 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.675 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.675 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:24.675 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.675 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.675 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.675 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.675 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.675 
15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.675 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.675 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.675 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.676 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.676 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.676 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.676 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.676 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.676 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.676 "name": "Existed_Raid", 00:11:24.676 "uuid": "66718b00-9ac2-4abd-b99a-0767efdf967a", 00:11:24.676 "strip_size_kb": 64, 00:11:24.676 "state": "online", 00:11:24.676 "raid_level": "concat", 00:11:24.676 "superblock": false, 00:11:24.676 "num_base_bdevs": 4, 00:11:24.676 "num_base_bdevs_discovered": 4, 00:11:24.676 "num_base_bdevs_operational": 4, 00:11:24.676 "base_bdevs_list": [ 00:11:24.676 { 00:11:24.676 "name": "BaseBdev1", 00:11:24.676 "uuid": "be9cf3ac-850a-4cf8-a3ab-e47a9f7d2214", 00:11:24.676 "is_configured": true, 00:11:24.676 "data_offset": 0, 00:11:24.676 "data_size": 65536 00:11:24.676 }, 00:11:24.676 { 00:11:24.676 "name": "BaseBdev2", 00:11:24.676 "uuid": "b97c85bd-f41b-4fc3-b80a-9701475a79cf", 00:11:24.676 "is_configured": true, 00:11:24.676 "data_offset": 0, 00:11:24.676 "data_size": 65536 00:11:24.676 }, 00:11:24.676 { 00:11:24.676 "name": "BaseBdev3", 
00:11:24.676 "uuid": "3e31e6e7-2269-4e65-b1ca-367f9aafe582", 00:11:24.676 "is_configured": true, 00:11:24.676 "data_offset": 0, 00:11:24.676 "data_size": 65536 00:11:24.676 }, 00:11:24.676 { 00:11:24.676 "name": "BaseBdev4", 00:11:24.676 "uuid": "0ca9acbc-4ad3-419d-a30b-b5d4495c1146", 00:11:24.676 "is_configured": true, 00:11:24.676 "data_offset": 0, 00:11:24.676 "data_size": 65536 00:11:24.676 } 00:11:24.676 ] 00:11:24.676 }' 00:11:24.676 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.676 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.936 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:24.936 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:24.936 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.936 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.936 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.936 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.936 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:24.936 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.936 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.936 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.936 [2024-11-25 15:38:23.554521] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.936 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.936 
15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.936 "name": "Existed_Raid", 00:11:24.936 "aliases": [ 00:11:24.936 "66718b00-9ac2-4abd-b99a-0767efdf967a" 00:11:24.936 ], 00:11:24.936 "product_name": "Raid Volume", 00:11:24.936 "block_size": 512, 00:11:24.936 "num_blocks": 262144, 00:11:24.936 "uuid": "66718b00-9ac2-4abd-b99a-0767efdf967a", 00:11:24.936 "assigned_rate_limits": { 00:11:24.936 "rw_ios_per_sec": 0, 00:11:24.936 "rw_mbytes_per_sec": 0, 00:11:24.936 "r_mbytes_per_sec": 0, 00:11:24.936 "w_mbytes_per_sec": 0 00:11:24.936 }, 00:11:24.936 "claimed": false, 00:11:24.936 "zoned": false, 00:11:24.936 "supported_io_types": { 00:11:24.936 "read": true, 00:11:24.936 "write": true, 00:11:24.936 "unmap": true, 00:11:24.936 "flush": true, 00:11:24.936 "reset": true, 00:11:24.936 "nvme_admin": false, 00:11:24.936 "nvme_io": false, 00:11:24.936 "nvme_io_md": false, 00:11:24.936 "write_zeroes": true, 00:11:24.936 "zcopy": false, 00:11:24.936 "get_zone_info": false, 00:11:24.936 "zone_management": false, 00:11:24.936 "zone_append": false, 00:11:24.936 "compare": false, 00:11:24.936 "compare_and_write": false, 00:11:24.936 "abort": false, 00:11:24.936 "seek_hole": false, 00:11:24.936 "seek_data": false, 00:11:24.936 "copy": false, 00:11:24.936 "nvme_iov_md": false 00:11:24.936 }, 00:11:24.936 "memory_domains": [ 00:11:24.936 { 00:11:24.936 "dma_device_id": "system", 00:11:24.936 "dma_device_type": 1 00:11:24.936 }, 00:11:24.936 { 00:11:24.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.936 "dma_device_type": 2 00:11:24.936 }, 00:11:24.936 { 00:11:24.936 "dma_device_id": "system", 00:11:24.936 "dma_device_type": 1 00:11:24.936 }, 00:11:24.936 { 00:11:24.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.936 "dma_device_type": 2 00:11:24.936 }, 00:11:24.936 { 00:11:24.936 "dma_device_id": "system", 00:11:24.936 "dma_device_type": 1 00:11:24.936 }, 00:11:24.936 { 00:11:24.936 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:24.936 "dma_device_type": 2 00:11:24.936 }, 00:11:24.936 { 00:11:24.936 "dma_device_id": "system", 00:11:24.936 "dma_device_type": 1 00:11:24.936 }, 00:11:24.936 { 00:11:24.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.936 "dma_device_type": 2 00:11:24.936 } 00:11:24.936 ], 00:11:24.936 "driver_specific": { 00:11:24.936 "raid": { 00:11:24.936 "uuid": "66718b00-9ac2-4abd-b99a-0767efdf967a", 00:11:24.936 "strip_size_kb": 64, 00:11:24.936 "state": "online", 00:11:24.936 "raid_level": "concat", 00:11:24.936 "superblock": false, 00:11:24.936 "num_base_bdevs": 4, 00:11:24.936 "num_base_bdevs_discovered": 4, 00:11:24.936 "num_base_bdevs_operational": 4, 00:11:24.936 "base_bdevs_list": [ 00:11:24.936 { 00:11:24.936 "name": "BaseBdev1", 00:11:24.936 "uuid": "be9cf3ac-850a-4cf8-a3ab-e47a9f7d2214", 00:11:24.936 "is_configured": true, 00:11:24.936 "data_offset": 0, 00:11:24.937 "data_size": 65536 00:11:24.937 }, 00:11:24.937 { 00:11:24.937 "name": "BaseBdev2", 00:11:24.937 "uuid": "b97c85bd-f41b-4fc3-b80a-9701475a79cf", 00:11:24.937 "is_configured": true, 00:11:24.937 "data_offset": 0, 00:11:24.937 "data_size": 65536 00:11:24.937 }, 00:11:24.937 { 00:11:24.937 "name": "BaseBdev3", 00:11:24.937 "uuid": "3e31e6e7-2269-4e65-b1ca-367f9aafe582", 00:11:24.937 "is_configured": true, 00:11:24.937 "data_offset": 0, 00:11:24.937 "data_size": 65536 00:11:24.937 }, 00:11:24.937 { 00:11:24.937 "name": "BaseBdev4", 00:11:24.937 "uuid": "0ca9acbc-4ad3-419d-a30b-b5d4495c1146", 00:11:24.937 "is_configured": true, 00:11:24.937 "data_offset": 0, 00:11:24.937 "data_size": 65536 00:11:24.937 } 00:11:24.937 ] 00:11:24.937 } 00:11:24.937 } 00:11:24.937 }' 00:11:24.937 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:25.197 BaseBdev2 
00:11:25.197 BaseBdev3 00:11:25.197 BaseBdev4' 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.197 15:38:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.197 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.458 15:38:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.458 [2024-11-25 15:38:23.893655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.458 [2024-11-25 15:38:23.893686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.458 [2024-11-25 15:38:23.893734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.458 15:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.458 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.458 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.458 "name": "Existed_Raid", 00:11:25.458 "uuid": "66718b00-9ac2-4abd-b99a-0767efdf967a", 00:11:25.458 "strip_size_kb": 64, 00:11:25.458 "state": "offline", 00:11:25.458 "raid_level": "concat", 00:11:25.458 "superblock": false, 00:11:25.458 "num_base_bdevs": 4, 00:11:25.458 "num_base_bdevs_discovered": 3, 00:11:25.458 "num_base_bdevs_operational": 3, 00:11:25.458 "base_bdevs_list": [ 00:11:25.458 { 00:11:25.458 "name": null, 00:11:25.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.458 "is_configured": false, 00:11:25.458 "data_offset": 0, 00:11:25.458 "data_size": 65536 00:11:25.458 }, 00:11:25.458 { 00:11:25.458 "name": "BaseBdev2", 00:11:25.458 "uuid": "b97c85bd-f41b-4fc3-b80a-9701475a79cf", 00:11:25.458 "is_configured": 
true, 00:11:25.458 "data_offset": 0, 00:11:25.458 "data_size": 65536 00:11:25.458 }, 00:11:25.458 { 00:11:25.458 "name": "BaseBdev3", 00:11:25.458 "uuid": "3e31e6e7-2269-4e65-b1ca-367f9aafe582", 00:11:25.458 "is_configured": true, 00:11:25.458 "data_offset": 0, 00:11:25.458 "data_size": 65536 00:11:25.458 }, 00:11:25.458 { 00:11:25.458 "name": "BaseBdev4", 00:11:25.458 "uuid": "0ca9acbc-4ad3-419d-a30b-b5d4495c1146", 00:11:25.458 "is_configured": true, 00:11:25.458 "data_offset": 0, 00:11:25.458 "data_size": 65536 00:11:25.458 } 00:11:25.458 ] 00:11:25.458 }' 00:11:25.458 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.458 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.028 [2024-11-25 15:38:24.502715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.028 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.028 [2024-11-25 15:38:24.651099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:26.288 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.288 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.288 15:38:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.288 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.289 [2024-11-25 15:38:24.803881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:26.289 [2024-11-25 15:38:24.803973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.289 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.550 BaseBdev2 00:11:26.550 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.550 15:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:26.550 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:26.550 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.550 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:26.550 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.550 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:26.550 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.550 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.550 15:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.550 [ 00:11:26.550 { 00:11:26.550 "name": "BaseBdev2", 00:11:26.550 "aliases": [ 00:11:26.550 "ebaadbd1-06f8-4821-8712-8b7b37a2ab02" 00:11:26.550 ], 00:11:26.550 "product_name": "Malloc disk", 00:11:26.550 "block_size": 512, 00:11:26.550 "num_blocks": 65536, 00:11:26.550 "uuid": "ebaadbd1-06f8-4821-8712-8b7b37a2ab02", 00:11:26.550 "assigned_rate_limits": { 00:11:26.550 "rw_ios_per_sec": 0, 00:11:26.550 "rw_mbytes_per_sec": 0, 00:11:26.550 "r_mbytes_per_sec": 0, 00:11:26.550 "w_mbytes_per_sec": 0 00:11:26.550 }, 00:11:26.550 "claimed": false, 00:11:26.550 "zoned": false, 00:11:26.550 "supported_io_types": { 00:11:26.550 "read": true, 00:11:26.550 "write": true, 00:11:26.550 "unmap": true, 00:11:26.550 "flush": true, 00:11:26.550 "reset": true, 00:11:26.550 "nvme_admin": false, 00:11:26.550 "nvme_io": false, 00:11:26.550 "nvme_io_md": false, 00:11:26.550 "write_zeroes": true, 00:11:26.550 "zcopy": true, 00:11:26.550 "get_zone_info": false, 00:11:26.550 "zone_management": false, 00:11:26.550 "zone_append": false, 00:11:26.550 "compare": false, 00:11:26.550 "compare_and_write": false, 00:11:26.550 "abort": true, 00:11:26.550 "seek_hole": false, 00:11:26.550 "seek_data": false, 
00:11:26.550 "copy": true, 00:11:26.550 "nvme_iov_md": false 00:11:26.550 }, 00:11:26.550 "memory_domains": [ 00:11:26.550 { 00:11:26.550 "dma_device_id": "system", 00:11:26.550 "dma_device_type": 1 00:11:26.550 }, 00:11:26.550 { 00:11:26.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.550 "dma_device_type": 2 00:11:26.550 } 00:11:26.550 ], 00:11:26.550 "driver_specific": {} 00:11:26.550 } 00:11:26.550 ] 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.550 BaseBdev3 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.550 
15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.550 [ 00:11:26.550 { 00:11:26.550 "name": "BaseBdev3", 00:11:26.550 "aliases": [ 00:11:26.550 "890ddf75-f6d8-4e91-8f66-904eca07304e" 00:11:26.550 ], 00:11:26.550 "product_name": "Malloc disk", 00:11:26.550 "block_size": 512, 00:11:26.550 "num_blocks": 65536, 00:11:26.550 "uuid": "890ddf75-f6d8-4e91-8f66-904eca07304e", 00:11:26.550 "assigned_rate_limits": { 00:11:26.550 "rw_ios_per_sec": 0, 00:11:26.550 "rw_mbytes_per_sec": 0, 00:11:26.550 "r_mbytes_per_sec": 0, 00:11:26.550 "w_mbytes_per_sec": 0 00:11:26.550 }, 00:11:26.550 "claimed": false, 00:11:26.550 "zoned": false, 00:11:26.550 "supported_io_types": { 00:11:26.550 "read": true, 00:11:26.550 "write": true, 00:11:26.550 "unmap": true, 00:11:26.550 "flush": true, 00:11:26.550 "reset": true, 00:11:26.550 "nvme_admin": false, 00:11:26.550 "nvme_io": false, 00:11:26.550 "nvme_io_md": false, 00:11:26.550 "write_zeroes": true, 00:11:26.550 "zcopy": true, 00:11:26.550 "get_zone_info": false, 00:11:26.550 "zone_management": false, 00:11:26.550 "zone_append": false, 00:11:26.550 "compare": false, 00:11:26.550 "compare_and_write": false, 00:11:26.550 "abort": true, 00:11:26.550 "seek_hole": false, 00:11:26.550 "seek_data": false, 00:11:26.550 
"copy": true, 00:11:26.550 "nvme_iov_md": false 00:11:26.550 }, 00:11:26.550 "memory_domains": [ 00:11:26.550 { 00:11:26.550 "dma_device_id": "system", 00:11:26.550 "dma_device_type": 1 00:11:26.550 }, 00:11:26.550 { 00:11:26.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.550 "dma_device_type": 2 00:11:26.550 } 00:11:26.550 ], 00:11:26.550 "driver_specific": {} 00:11:26.550 } 00:11:26.550 ] 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.550 BaseBdev4 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.550 15:38:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.550 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.550 [ 00:11:26.550 { 00:11:26.550 "name": "BaseBdev4", 00:11:26.550 "aliases": [ 00:11:26.550 "9cd83e5b-0985-4e8d-8b1a-d84e0507d9c3" 00:11:26.550 ], 00:11:26.550 "product_name": "Malloc disk", 00:11:26.550 "block_size": 512, 00:11:26.550 "num_blocks": 65536, 00:11:26.550 "uuid": "9cd83e5b-0985-4e8d-8b1a-d84e0507d9c3", 00:11:26.550 "assigned_rate_limits": { 00:11:26.550 "rw_ios_per_sec": 0, 00:11:26.550 "rw_mbytes_per_sec": 0, 00:11:26.550 "r_mbytes_per_sec": 0, 00:11:26.550 "w_mbytes_per_sec": 0 00:11:26.550 }, 00:11:26.550 "claimed": false, 00:11:26.550 "zoned": false, 00:11:26.550 "supported_io_types": { 00:11:26.550 "read": true, 00:11:26.550 "write": true, 00:11:26.550 "unmap": true, 00:11:26.550 "flush": true, 00:11:26.550 "reset": true, 00:11:26.551 "nvme_admin": false, 00:11:26.551 "nvme_io": false, 00:11:26.551 "nvme_io_md": false, 00:11:26.551 "write_zeroes": true, 00:11:26.551 "zcopy": true, 00:11:26.551 "get_zone_info": false, 00:11:26.551 "zone_management": false, 00:11:26.551 "zone_append": false, 00:11:26.551 "compare": false, 00:11:26.551 "compare_and_write": false, 00:11:26.551 "abort": true, 00:11:26.551 "seek_hole": false, 00:11:26.551 "seek_data": false, 00:11:26.551 "copy": true, 
00:11:26.551 "nvme_iov_md": false 00:11:26.551 }, 00:11:26.551 "memory_domains": [ 00:11:26.551 { 00:11:26.551 "dma_device_id": "system", 00:11:26.551 "dma_device_type": 1 00:11:26.551 }, 00:11:26.551 { 00:11:26.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.551 "dma_device_type": 2 00:11:26.551 } 00:11:26.551 ], 00:11:26.551 "driver_specific": {} 00:11:26.551 } 00:11:26.551 ] 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.551 [2024-11-25 15:38:25.191760] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.551 [2024-11-25 15:38:25.191865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.551 [2024-11-25 15:38:25.191905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.551 [2024-11-25 15:38:25.193747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.551 [2024-11-25 15:38:25.193839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.551 15:38:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.551 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.811 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.811 "name": "Existed_Raid", 00:11:26.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.811 "strip_size_kb": 64, 00:11:26.811 "state": "configuring", 00:11:26.811 
"raid_level": "concat", 00:11:26.811 "superblock": false, 00:11:26.811 "num_base_bdevs": 4, 00:11:26.811 "num_base_bdevs_discovered": 3, 00:11:26.811 "num_base_bdevs_operational": 4, 00:11:26.811 "base_bdevs_list": [ 00:11:26.811 { 00:11:26.811 "name": "BaseBdev1", 00:11:26.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.811 "is_configured": false, 00:11:26.811 "data_offset": 0, 00:11:26.811 "data_size": 0 00:11:26.811 }, 00:11:26.811 { 00:11:26.811 "name": "BaseBdev2", 00:11:26.811 "uuid": "ebaadbd1-06f8-4821-8712-8b7b37a2ab02", 00:11:26.811 "is_configured": true, 00:11:26.811 "data_offset": 0, 00:11:26.811 "data_size": 65536 00:11:26.811 }, 00:11:26.811 { 00:11:26.811 "name": "BaseBdev3", 00:11:26.811 "uuid": "890ddf75-f6d8-4e91-8f66-904eca07304e", 00:11:26.811 "is_configured": true, 00:11:26.811 "data_offset": 0, 00:11:26.811 "data_size": 65536 00:11:26.811 }, 00:11:26.811 { 00:11:26.812 "name": "BaseBdev4", 00:11:26.812 "uuid": "9cd83e5b-0985-4e8d-8b1a-d84e0507d9c3", 00:11:26.812 "is_configured": true, 00:11:26.812 "data_offset": 0, 00:11:26.812 "data_size": 65536 00:11:26.812 } 00:11:26.812 ] 00:11:26.812 }' 00:11:26.812 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.812 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.071 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.072 [2024-11-25 15:38:25.611070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.072 "name": "Existed_Raid", 00:11:27.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.072 "strip_size_kb": 64, 00:11:27.072 "state": "configuring", 00:11:27.072 "raid_level": "concat", 00:11:27.072 "superblock": false, 
00:11:27.072 "num_base_bdevs": 4, 00:11:27.072 "num_base_bdevs_discovered": 2, 00:11:27.072 "num_base_bdevs_operational": 4, 00:11:27.072 "base_bdevs_list": [ 00:11:27.072 { 00:11:27.072 "name": "BaseBdev1", 00:11:27.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.072 "is_configured": false, 00:11:27.072 "data_offset": 0, 00:11:27.072 "data_size": 0 00:11:27.072 }, 00:11:27.072 { 00:11:27.072 "name": null, 00:11:27.072 "uuid": "ebaadbd1-06f8-4821-8712-8b7b37a2ab02", 00:11:27.072 "is_configured": false, 00:11:27.072 "data_offset": 0, 00:11:27.072 "data_size": 65536 00:11:27.072 }, 00:11:27.072 { 00:11:27.072 "name": "BaseBdev3", 00:11:27.072 "uuid": "890ddf75-f6d8-4e91-8f66-904eca07304e", 00:11:27.072 "is_configured": true, 00:11:27.072 "data_offset": 0, 00:11:27.072 "data_size": 65536 00:11:27.072 }, 00:11:27.072 { 00:11:27.072 "name": "BaseBdev4", 00:11:27.072 "uuid": "9cd83e5b-0985-4e8d-8b1a-d84e0507d9c3", 00:11:27.072 "is_configured": true, 00:11:27.072 "data_offset": 0, 00:11:27.072 "data_size": 65536 00:11:27.072 } 00:11:27.072 ] 00:11:27.072 }' 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.072 15:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.332 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.332 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.332 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:27.332 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:27.592 15:38:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.592 [2024-11-25 15:38:26.090780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.592 BaseBdev1 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.592 15:38:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:27.592 [ 00:11:27.592 { 00:11:27.592 "name": "BaseBdev1", 00:11:27.592 "aliases": [ 00:11:27.592 "68720ed9-d7f2-41ec-b8a0-f773954f3740" 00:11:27.592 ], 00:11:27.592 "product_name": "Malloc disk", 00:11:27.592 "block_size": 512, 00:11:27.592 "num_blocks": 65536, 00:11:27.592 "uuid": "68720ed9-d7f2-41ec-b8a0-f773954f3740", 00:11:27.592 "assigned_rate_limits": { 00:11:27.592 "rw_ios_per_sec": 0, 00:11:27.592 "rw_mbytes_per_sec": 0, 00:11:27.592 "r_mbytes_per_sec": 0, 00:11:27.592 "w_mbytes_per_sec": 0 00:11:27.592 }, 00:11:27.592 "claimed": true, 00:11:27.593 "claim_type": "exclusive_write", 00:11:27.593 "zoned": false, 00:11:27.593 "supported_io_types": { 00:11:27.593 "read": true, 00:11:27.593 "write": true, 00:11:27.593 "unmap": true, 00:11:27.593 "flush": true, 00:11:27.593 "reset": true, 00:11:27.593 "nvme_admin": false, 00:11:27.593 "nvme_io": false, 00:11:27.593 "nvme_io_md": false, 00:11:27.593 "write_zeroes": true, 00:11:27.593 "zcopy": true, 00:11:27.593 "get_zone_info": false, 00:11:27.593 "zone_management": false, 00:11:27.593 "zone_append": false, 00:11:27.593 "compare": false, 00:11:27.593 "compare_and_write": false, 00:11:27.593 "abort": true, 00:11:27.593 "seek_hole": false, 00:11:27.593 "seek_data": false, 00:11:27.593 "copy": true, 00:11:27.593 "nvme_iov_md": false 00:11:27.593 }, 00:11:27.593 "memory_domains": [ 00:11:27.593 { 00:11:27.593 "dma_device_id": "system", 00:11:27.593 "dma_device_type": 1 00:11:27.593 }, 00:11:27.593 { 00:11:27.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.593 "dma_device_type": 2 00:11:27.593 } 00:11:27.593 ], 00:11:27.593 "driver_specific": {} 00:11:27.593 } 00:11:27.593 ] 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.593 "name": "Existed_Raid", 00:11:27.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.593 "strip_size_kb": 64, 00:11:27.593 "state": "configuring", 00:11:27.593 "raid_level": "concat", 00:11:27.593 "superblock": false, 
00:11:27.593 "num_base_bdevs": 4, 00:11:27.593 "num_base_bdevs_discovered": 3, 00:11:27.593 "num_base_bdevs_operational": 4, 00:11:27.593 "base_bdevs_list": [ 00:11:27.593 { 00:11:27.593 "name": "BaseBdev1", 00:11:27.593 "uuid": "68720ed9-d7f2-41ec-b8a0-f773954f3740", 00:11:27.593 "is_configured": true, 00:11:27.593 "data_offset": 0, 00:11:27.593 "data_size": 65536 00:11:27.593 }, 00:11:27.593 { 00:11:27.593 "name": null, 00:11:27.593 "uuid": "ebaadbd1-06f8-4821-8712-8b7b37a2ab02", 00:11:27.593 "is_configured": false, 00:11:27.593 "data_offset": 0, 00:11:27.593 "data_size": 65536 00:11:27.593 }, 00:11:27.593 { 00:11:27.593 "name": "BaseBdev3", 00:11:27.593 "uuid": "890ddf75-f6d8-4e91-8f66-904eca07304e", 00:11:27.593 "is_configured": true, 00:11:27.593 "data_offset": 0, 00:11:27.593 "data_size": 65536 00:11:27.593 }, 00:11:27.593 { 00:11:27.593 "name": "BaseBdev4", 00:11:27.593 "uuid": "9cd83e5b-0985-4e8d-8b1a-d84e0507d9c3", 00:11:27.593 "is_configured": true, 00:11:27.593 "data_offset": 0, 00:11:27.593 "data_size": 65536 00:11:27.593 } 00:11:27.593 ] 00:11:27.593 }' 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.593 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.854 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.854 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.854 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:27.854 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.113 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.113 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:28.114 15:38:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.114 [2024-11-25 15:38:26.574075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.114 15:38:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.114 "name": "Existed_Raid", 00:11:28.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.114 "strip_size_kb": 64, 00:11:28.114 "state": "configuring", 00:11:28.114 "raid_level": "concat", 00:11:28.114 "superblock": false, 00:11:28.114 "num_base_bdevs": 4, 00:11:28.114 "num_base_bdevs_discovered": 2, 00:11:28.114 "num_base_bdevs_operational": 4, 00:11:28.114 "base_bdevs_list": [ 00:11:28.114 { 00:11:28.114 "name": "BaseBdev1", 00:11:28.114 "uuid": "68720ed9-d7f2-41ec-b8a0-f773954f3740", 00:11:28.114 "is_configured": true, 00:11:28.114 "data_offset": 0, 00:11:28.114 "data_size": 65536 00:11:28.114 }, 00:11:28.114 { 00:11:28.114 "name": null, 00:11:28.114 "uuid": "ebaadbd1-06f8-4821-8712-8b7b37a2ab02", 00:11:28.114 "is_configured": false, 00:11:28.114 "data_offset": 0, 00:11:28.114 "data_size": 65536 00:11:28.114 }, 00:11:28.114 { 00:11:28.114 "name": null, 00:11:28.114 "uuid": "890ddf75-f6d8-4e91-8f66-904eca07304e", 00:11:28.114 "is_configured": false, 00:11:28.114 "data_offset": 0, 00:11:28.114 "data_size": 65536 00:11:28.114 }, 00:11:28.114 { 00:11:28.114 "name": "BaseBdev4", 00:11:28.114 "uuid": "9cd83e5b-0985-4e8d-8b1a-d84e0507d9c3", 00:11:28.114 "is_configured": true, 00:11:28.114 "data_offset": 0, 00:11:28.114 "data_size": 65536 00:11:28.114 } 00:11:28.114 ] 00:11:28.114 }' 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.114 15:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.373 15:38:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.373 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.374 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.374 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.374 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.633 [2024-11-25 15:38:27.069207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.633 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.633 "name": "Existed_Raid", 00:11:28.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.633 "strip_size_kb": 64, 00:11:28.634 "state": "configuring", 00:11:28.634 "raid_level": "concat", 00:11:28.634 "superblock": false, 00:11:28.634 "num_base_bdevs": 4, 00:11:28.634 "num_base_bdevs_discovered": 3, 00:11:28.634 "num_base_bdevs_operational": 4, 00:11:28.634 "base_bdevs_list": [ 00:11:28.634 { 00:11:28.634 "name": "BaseBdev1", 00:11:28.634 "uuid": "68720ed9-d7f2-41ec-b8a0-f773954f3740", 00:11:28.634 "is_configured": true, 00:11:28.634 "data_offset": 0, 00:11:28.634 "data_size": 65536 00:11:28.634 }, 00:11:28.634 { 00:11:28.634 "name": null, 00:11:28.634 "uuid": "ebaadbd1-06f8-4821-8712-8b7b37a2ab02", 00:11:28.634 "is_configured": false, 00:11:28.634 "data_offset": 0, 00:11:28.634 "data_size": 65536 00:11:28.634 }, 00:11:28.634 { 00:11:28.634 "name": "BaseBdev3", 00:11:28.634 "uuid": 
"890ddf75-f6d8-4e91-8f66-904eca07304e", 00:11:28.634 "is_configured": true, 00:11:28.634 "data_offset": 0, 00:11:28.634 "data_size": 65536 00:11:28.634 }, 00:11:28.634 { 00:11:28.634 "name": "BaseBdev4", 00:11:28.634 "uuid": "9cd83e5b-0985-4e8d-8b1a-d84e0507d9c3", 00:11:28.634 "is_configured": true, 00:11:28.634 "data_offset": 0, 00:11:28.634 "data_size": 65536 00:11:28.634 } 00:11:28.634 ] 00:11:28.634 }' 00:11:28.634 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.634 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.893 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.893 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.893 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.893 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.893 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.893 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:28.893 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:28.893 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.893 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.893 [2024-11-25 15:38:27.560390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.154 "name": "Existed_Raid", 00:11:29.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.154 "strip_size_kb": 64, 00:11:29.154 "state": "configuring", 00:11:29.154 "raid_level": "concat", 00:11:29.154 "superblock": false, 00:11:29.154 "num_base_bdevs": 4, 00:11:29.154 
"num_base_bdevs_discovered": 2, 00:11:29.154 "num_base_bdevs_operational": 4, 00:11:29.154 "base_bdevs_list": [ 00:11:29.154 { 00:11:29.154 "name": null, 00:11:29.154 "uuid": "68720ed9-d7f2-41ec-b8a0-f773954f3740", 00:11:29.154 "is_configured": false, 00:11:29.154 "data_offset": 0, 00:11:29.154 "data_size": 65536 00:11:29.154 }, 00:11:29.154 { 00:11:29.154 "name": null, 00:11:29.154 "uuid": "ebaadbd1-06f8-4821-8712-8b7b37a2ab02", 00:11:29.154 "is_configured": false, 00:11:29.154 "data_offset": 0, 00:11:29.154 "data_size": 65536 00:11:29.154 }, 00:11:29.154 { 00:11:29.154 "name": "BaseBdev3", 00:11:29.154 "uuid": "890ddf75-f6d8-4e91-8f66-904eca07304e", 00:11:29.154 "is_configured": true, 00:11:29.154 "data_offset": 0, 00:11:29.154 "data_size": 65536 00:11:29.154 }, 00:11:29.154 { 00:11:29.154 "name": "BaseBdev4", 00:11:29.154 "uuid": "9cd83e5b-0985-4e8d-8b1a-d84e0507d9c3", 00:11:29.154 "is_configured": true, 00:11:29.154 "data_offset": 0, 00:11:29.154 "data_size": 65536 00:11:29.154 } 00:11:29.154 ] 00:11:29.154 }' 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.154 15:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.414 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.414 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.414 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.414 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.674 [2024-11-25 15:38:28.128656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.674 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.675 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.675 15:38:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.675 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.675 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.675 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.675 "name": "Existed_Raid", 00:11:29.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.675 "strip_size_kb": 64, 00:11:29.675 "state": "configuring", 00:11:29.675 "raid_level": "concat", 00:11:29.675 "superblock": false, 00:11:29.675 "num_base_bdevs": 4, 00:11:29.675 "num_base_bdevs_discovered": 3, 00:11:29.675 "num_base_bdevs_operational": 4, 00:11:29.675 "base_bdevs_list": [ 00:11:29.675 { 00:11:29.675 "name": null, 00:11:29.675 "uuid": "68720ed9-d7f2-41ec-b8a0-f773954f3740", 00:11:29.675 "is_configured": false, 00:11:29.675 "data_offset": 0, 00:11:29.675 "data_size": 65536 00:11:29.675 }, 00:11:29.675 { 00:11:29.675 "name": "BaseBdev2", 00:11:29.675 "uuid": "ebaadbd1-06f8-4821-8712-8b7b37a2ab02", 00:11:29.675 "is_configured": true, 00:11:29.675 "data_offset": 0, 00:11:29.675 "data_size": 65536 00:11:29.675 }, 00:11:29.675 { 00:11:29.675 "name": "BaseBdev3", 00:11:29.675 "uuid": "890ddf75-f6d8-4e91-8f66-904eca07304e", 00:11:29.675 "is_configured": true, 00:11:29.675 "data_offset": 0, 00:11:29.675 "data_size": 65536 00:11:29.675 }, 00:11:29.675 { 00:11:29.675 "name": "BaseBdev4", 00:11:29.675 "uuid": "9cd83e5b-0985-4e8d-8b1a-d84e0507d9c3", 00:11:29.675 "is_configured": true, 00:11:29.675 "data_offset": 0, 00:11:29.675 "data_size": 65536 00:11:29.675 } 00:11:29.675 ] 00:11:29.675 }' 00:11:29.675 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.675 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.935 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:29.935 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.935 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.935 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.935 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.935 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:29.935 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.935 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:29.935 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.935 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 68720ed9-d7f2-41ec-b8a0-f773954f3740 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.195 [2024-11-25 15:38:28.689822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:30.195 [2024-11-25 15:38:28.689925] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:30.195 [2024-11-25 15:38:28.689951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:30.195 [2024-11-25 15:38:28.690241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:30.195 [2024-11-25 15:38:28.690439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:30.195 [2024-11-25 15:38:28.690485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:30.195 [2024-11-25 15:38:28.690778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.195 NewBaseBdev 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.195 [ 00:11:30.195 { 00:11:30.195 "name": "NewBaseBdev", 00:11:30.195 "aliases": [ 00:11:30.195 "68720ed9-d7f2-41ec-b8a0-f773954f3740" 00:11:30.195 ], 00:11:30.195 "product_name": "Malloc disk", 00:11:30.195 "block_size": 512, 00:11:30.195 "num_blocks": 65536, 00:11:30.195 "uuid": "68720ed9-d7f2-41ec-b8a0-f773954f3740", 00:11:30.195 "assigned_rate_limits": { 00:11:30.195 "rw_ios_per_sec": 0, 00:11:30.195 "rw_mbytes_per_sec": 0, 00:11:30.195 "r_mbytes_per_sec": 0, 00:11:30.195 "w_mbytes_per_sec": 0 00:11:30.195 }, 00:11:30.195 "claimed": true, 00:11:30.195 "claim_type": "exclusive_write", 00:11:30.195 "zoned": false, 00:11:30.195 "supported_io_types": { 00:11:30.195 "read": true, 00:11:30.195 "write": true, 00:11:30.195 "unmap": true, 00:11:30.195 "flush": true, 00:11:30.195 "reset": true, 00:11:30.195 "nvme_admin": false, 00:11:30.195 "nvme_io": false, 00:11:30.195 "nvme_io_md": false, 00:11:30.195 "write_zeroes": true, 00:11:30.195 "zcopy": true, 00:11:30.195 "get_zone_info": false, 00:11:30.195 "zone_management": false, 00:11:30.195 "zone_append": false, 00:11:30.195 "compare": false, 00:11:30.195 "compare_and_write": false, 00:11:30.195 "abort": true, 00:11:30.195 "seek_hole": false, 00:11:30.195 "seek_data": false, 00:11:30.195 "copy": true, 00:11:30.195 "nvme_iov_md": false 00:11:30.195 }, 00:11:30.195 "memory_domains": [ 00:11:30.195 { 00:11:30.195 "dma_device_id": "system", 00:11:30.195 "dma_device_type": 1 00:11:30.195 }, 00:11:30.195 { 00:11:30.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.195 "dma_device_type": 2 00:11:30.195 } 00:11:30.195 ], 00:11:30.195 "driver_specific": {} 00:11:30.195 } 00:11:30.195 ] 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.195 "name": "Existed_Raid", 00:11:30.195 "uuid": "f983f2f7-9bbb-4527-bb48-402e84e54df7", 00:11:30.195 "strip_size_kb": 64, 00:11:30.195 "state": "online", 00:11:30.195 "raid_level": "concat", 00:11:30.195 "superblock": false, 00:11:30.195 
"num_base_bdevs": 4, 00:11:30.195 "num_base_bdevs_discovered": 4, 00:11:30.195 "num_base_bdevs_operational": 4, 00:11:30.195 "base_bdevs_list": [ 00:11:30.195 { 00:11:30.195 "name": "NewBaseBdev", 00:11:30.195 "uuid": "68720ed9-d7f2-41ec-b8a0-f773954f3740", 00:11:30.195 "is_configured": true, 00:11:30.195 "data_offset": 0, 00:11:30.195 "data_size": 65536 00:11:30.195 }, 00:11:30.195 { 00:11:30.195 "name": "BaseBdev2", 00:11:30.195 "uuid": "ebaadbd1-06f8-4821-8712-8b7b37a2ab02", 00:11:30.195 "is_configured": true, 00:11:30.195 "data_offset": 0, 00:11:30.195 "data_size": 65536 00:11:30.195 }, 00:11:30.195 { 00:11:30.195 "name": "BaseBdev3", 00:11:30.195 "uuid": "890ddf75-f6d8-4e91-8f66-904eca07304e", 00:11:30.195 "is_configured": true, 00:11:30.195 "data_offset": 0, 00:11:30.195 "data_size": 65536 00:11:30.195 }, 00:11:30.195 { 00:11:30.195 "name": "BaseBdev4", 00:11:30.195 "uuid": "9cd83e5b-0985-4e8d-8b1a-d84e0507d9c3", 00:11:30.195 "is_configured": true, 00:11:30.195 "data_offset": 0, 00:11:30.195 "data_size": 65536 00:11:30.195 } 00:11:30.195 ] 00:11:30.195 }' 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.195 15:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.765 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:30.765 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:30.765 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:30.766 15:38:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.766 [2024-11-25 15:38:29.169440] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.766 "name": "Existed_Raid", 00:11:30.766 "aliases": [ 00:11:30.766 "f983f2f7-9bbb-4527-bb48-402e84e54df7" 00:11:30.766 ], 00:11:30.766 "product_name": "Raid Volume", 00:11:30.766 "block_size": 512, 00:11:30.766 "num_blocks": 262144, 00:11:30.766 "uuid": "f983f2f7-9bbb-4527-bb48-402e84e54df7", 00:11:30.766 "assigned_rate_limits": { 00:11:30.766 "rw_ios_per_sec": 0, 00:11:30.766 "rw_mbytes_per_sec": 0, 00:11:30.766 "r_mbytes_per_sec": 0, 00:11:30.766 "w_mbytes_per_sec": 0 00:11:30.766 }, 00:11:30.766 "claimed": false, 00:11:30.766 "zoned": false, 00:11:30.766 "supported_io_types": { 00:11:30.766 "read": true, 00:11:30.766 "write": true, 00:11:30.766 "unmap": true, 00:11:30.766 "flush": true, 00:11:30.766 "reset": true, 00:11:30.766 "nvme_admin": false, 00:11:30.766 "nvme_io": false, 00:11:30.766 "nvme_io_md": false, 00:11:30.766 "write_zeroes": true, 00:11:30.766 "zcopy": false, 00:11:30.766 "get_zone_info": false, 00:11:30.766 "zone_management": false, 00:11:30.766 "zone_append": false, 00:11:30.766 "compare": false, 00:11:30.766 "compare_and_write": false, 00:11:30.766 "abort": false, 00:11:30.766 "seek_hole": false, 00:11:30.766 "seek_data": false, 00:11:30.766 "copy": false, 00:11:30.766 "nvme_iov_md": false 00:11:30.766 }, 
00:11:30.766 "memory_domains": [ 00:11:30.766 { 00:11:30.766 "dma_device_id": "system", 00:11:30.766 "dma_device_type": 1 00:11:30.766 }, 00:11:30.766 { 00:11:30.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.766 "dma_device_type": 2 00:11:30.766 }, 00:11:30.766 { 00:11:30.766 "dma_device_id": "system", 00:11:30.766 "dma_device_type": 1 00:11:30.766 }, 00:11:30.766 { 00:11:30.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.766 "dma_device_type": 2 00:11:30.766 }, 00:11:30.766 { 00:11:30.766 "dma_device_id": "system", 00:11:30.766 "dma_device_type": 1 00:11:30.766 }, 00:11:30.766 { 00:11:30.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.766 "dma_device_type": 2 00:11:30.766 }, 00:11:30.766 { 00:11:30.766 "dma_device_id": "system", 00:11:30.766 "dma_device_type": 1 00:11:30.766 }, 00:11:30.766 { 00:11:30.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.766 "dma_device_type": 2 00:11:30.766 } 00:11:30.766 ], 00:11:30.766 "driver_specific": { 00:11:30.766 "raid": { 00:11:30.766 "uuid": "f983f2f7-9bbb-4527-bb48-402e84e54df7", 00:11:30.766 "strip_size_kb": 64, 00:11:30.766 "state": "online", 00:11:30.766 "raid_level": "concat", 00:11:30.766 "superblock": false, 00:11:30.766 "num_base_bdevs": 4, 00:11:30.766 "num_base_bdevs_discovered": 4, 00:11:30.766 "num_base_bdevs_operational": 4, 00:11:30.766 "base_bdevs_list": [ 00:11:30.766 { 00:11:30.766 "name": "NewBaseBdev", 00:11:30.766 "uuid": "68720ed9-d7f2-41ec-b8a0-f773954f3740", 00:11:30.766 "is_configured": true, 00:11:30.766 "data_offset": 0, 00:11:30.766 "data_size": 65536 00:11:30.766 }, 00:11:30.766 { 00:11:30.766 "name": "BaseBdev2", 00:11:30.766 "uuid": "ebaadbd1-06f8-4821-8712-8b7b37a2ab02", 00:11:30.766 "is_configured": true, 00:11:30.766 "data_offset": 0, 00:11:30.766 "data_size": 65536 00:11:30.766 }, 00:11:30.766 { 00:11:30.766 "name": "BaseBdev3", 00:11:30.766 "uuid": "890ddf75-f6d8-4e91-8f66-904eca07304e", 00:11:30.766 "is_configured": true, 00:11:30.766 "data_offset": 0, 
00:11:30.766 "data_size": 65536 00:11:30.766 }, 00:11:30.766 { 00:11:30.766 "name": "BaseBdev4", 00:11:30.766 "uuid": "9cd83e5b-0985-4e8d-8b1a-d84e0507d9c3", 00:11:30.766 "is_configured": true, 00:11:30.766 "data_offset": 0, 00:11:30.766 "data_size": 65536 00:11:30.766 } 00:11:30.766 ] 00:11:30.766 } 00:11:30.766 } 00:11:30.766 }' 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:30.766 BaseBdev2 00:11:30.766 BaseBdev3 00:11:30.766 BaseBdev4' 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.766 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.026 [2024-11-25 15:38:29.508469] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.026 [2024-11-25 15:38:29.508500] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:31.026 [2024-11-25 15:38:29.508575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.026 [2024-11-25 15:38:29.508641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.026 [2024-11-25 15:38:29.508650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71018 00:11:31.026 15:38:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71018 ']' 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71018 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71018 00:11:31.026 killing process with pid 71018 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.026 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.027 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71018' 00:11:31.027 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71018 00:11:31.027 [2024-11-25 15:38:29.555619] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:31.027 15:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71018 00:11:31.286 [2024-11-25 15:38:29.945673] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:32.668 ************************************ 00:11:32.668 END TEST raid_state_function_test 00:11:32.668 ************************************ 00:11:32.668 00:11:32.668 real 0m11.349s 00:11:32.668 user 0m18.140s 00:11:32.668 sys 0m1.970s 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.668 15:38:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:32.668 15:38:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:32.668 15:38:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.668 15:38:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.668 ************************************ 00:11:32.668 START TEST raid_state_function_test_sb 00:11:32.668 ************************************ 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71689 00:11:32.668 15:38:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71689' 00:11:32.668 Process raid pid: 71689 00:11:32.668 15:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71689 00:11:32.669 15:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71689 ']' 00:11:32.669 15:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.669 15:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.669 15:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.669 15:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.669 15:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.669 [2024-11-25 15:38:31.197189] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:11:32.669 [2024-11-25 15:38:31.197383] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.928 [2024-11-25 15:38:31.370235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.928 [2024-11-25 15:38:31.481201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.189 [2024-11-25 15:38:31.683669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.189 [2024-11-25 15:38:31.683766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.450 [2024-11-25 15:38:32.038399] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.450 [2024-11-25 15:38:32.038449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.450 [2024-11-25 15:38:32.038460] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.450 [2024-11-25 15:38:32.038486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.450 [2024-11-25 15:38:32.038493] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:33.450 [2024-11-25 15:38:32.038502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.450 [2024-11-25 15:38:32.038508] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:33.450 [2024-11-25 15:38:32.038516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.450 
15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.450 "name": "Existed_Raid", 00:11:33.450 "uuid": "d56bc403-d895-4597-a823-9483aaaf4a31", 00:11:33.450 "strip_size_kb": 64, 00:11:33.450 "state": "configuring", 00:11:33.450 "raid_level": "concat", 00:11:33.450 "superblock": true, 00:11:33.450 "num_base_bdevs": 4, 00:11:33.450 "num_base_bdevs_discovered": 0, 00:11:33.450 "num_base_bdevs_operational": 4, 00:11:33.450 "base_bdevs_list": [ 00:11:33.450 { 00:11:33.450 "name": "BaseBdev1", 00:11:33.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.450 "is_configured": false, 00:11:33.450 "data_offset": 0, 00:11:33.450 "data_size": 0 00:11:33.450 }, 00:11:33.450 { 00:11:33.450 "name": "BaseBdev2", 00:11:33.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.450 "is_configured": false, 00:11:33.450 "data_offset": 0, 00:11:33.450 "data_size": 0 00:11:33.450 }, 00:11:33.450 { 00:11:33.450 "name": "BaseBdev3", 00:11:33.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.450 "is_configured": false, 00:11:33.450 "data_offset": 0, 00:11:33.450 "data_size": 0 00:11:33.450 }, 00:11:33.450 { 00:11:33.450 "name": "BaseBdev4", 00:11:33.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.450 "is_configured": false, 00:11:33.450 "data_offset": 0, 00:11:33.450 "data_size": 0 00:11:33.450 } 00:11:33.450 ] 00:11:33.450 }' 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.450 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.019 15:38:32 
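`verify_raid_bdev_state` filters the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "Existed_Raid")'` and then compares the state, level, strip size, and base-bdev counts. A Python rendering of that check, applied to a trimmed copy of the `raid_bdev_info` captured above (a sketch of the test's logic, not SPDK code):

```python
import json

# Trimmed copy of the raid_bdev_info JSON the test captured above.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    """Python equivalent of bdev_raid.sh's verify_raid_bdev_state checks."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return True

# No base bdevs have been created yet, so discovered is 0 and the
# raid bdev stays in "configuring".
print(verify_raid_bdev_state(raid_bdev_info, "configuring", "concat", 64, 4))
```

As the log progresses, each `bdev_malloc_create` raises `num_base_bdevs_discovered` by one while the state stays `configuring` until all four base bdevs are claimed.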
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.019 [2024-11-25 15:38:32.449630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.019 [2024-11-25 15:38:32.449733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.019 [2024-11-25 15:38:32.457638] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.019 [2024-11-25 15:38:32.457740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.019 [2024-11-25 15:38:32.457768] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.019 [2024-11-25 15:38:32.457931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.019 [2024-11-25 15:38:32.457960] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.019 [2024-11-25 15:38:32.457983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.019 [2024-11-25 15:38:32.458053] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:34.019 [2024-11-25 15:38:32.458090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.019 [2024-11-25 15:38:32.499977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.019 BaseBdev1 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.019 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.019 [ 00:11:34.019 { 00:11:34.019 "name": "BaseBdev1", 00:11:34.019 "aliases": [ 00:11:34.019 "75dfd880-4482-4dd7-82e0-f3d68671adce" 00:11:34.019 ], 00:11:34.019 "product_name": "Malloc disk", 00:11:34.019 "block_size": 512, 00:11:34.020 "num_blocks": 65536, 00:11:34.020 "uuid": "75dfd880-4482-4dd7-82e0-f3d68671adce", 00:11:34.020 "assigned_rate_limits": { 00:11:34.020 "rw_ios_per_sec": 0, 00:11:34.020 "rw_mbytes_per_sec": 0, 00:11:34.020 "r_mbytes_per_sec": 0, 00:11:34.020 "w_mbytes_per_sec": 0 00:11:34.020 }, 00:11:34.020 "claimed": true, 00:11:34.020 "claim_type": "exclusive_write", 00:11:34.020 "zoned": false, 00:11:34.020 "supported_io_types": { 00:11:34.020 "read": true, 00:11:34.020 "write": true, 00:11:34.020 "unmap": true, 00:11:34.020 "flush": true, 00:11:34.020 "reset": true, 00:11:34.020 "nvme_admin": false, 00:11:34.020 "nvme_io": false, 00:11:34.020 "nvme_io_md": false, 00:11:34.020 "write_zeroes": true, 00:11:34.020 "zcopy": true, 00:11:34.020 "get_zone_info": false, 00:11:34.020 "zone_management": false, 00:11:34.020 "zone_append": false, 00:11:34.020 "compare": false, 00:11:34.020 "compare_and_write": false, 00:11:34.020 "abort": true, 00:11:34.020 "seek_hole": false, 00:11:34.020 "seek_data": false, 00:11:34.020 "copy": true, 00:11:34.020 "nvme_iov_md": false 00:11:34.020 }, 00:11:34.020 "memory_domains": [ 00:11:34.020 { 00:11:34.020 "dma_device_id": "system", 00:11:34.020 "dma_device_type": 1 00:11:34.020 }, 00:11:34.020 { 00:11:34.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.020 "dma_device_type": 2 00:11:34.020 } 
00:11:34.020 ], 00:11:34.020 "driver_specific": {} 00:11:34.020 } 00:11:34.020 ] 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.020 15:38:32 
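The descriptor above comes from `bdev_get_bdevs -b BaseBdev1 -t 2000` after `bdev_malloc_create 32 512 -b BaseBdev1`: the raid module has already claimed the malloc bdev with `exclusive_write`. A small sketch checking the capacity math and the claim from those fields (values copied from the log):

```python
# Fields copied from the BaseBdev1 descriptor in the log above.
bdev = {
    "name": "BaseBdev1",
    "block_size": 512,
    "num_blocks": 65536,
    "claimed": True,
    "claim_type": "exclusive_write",
}

# 65536 blocks * 512 B per block = 32 MiB, matching
# `bdev_malloc_create 32 512` (size in MiB, then block size in bytes).
size_mib = bdev["num_blocks"] * bdev["block_size"] // (1024 * 1024)
print(size_mib)            # 32
print(bdev["claim_type"])  # exclusive_write
```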
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.020 "name": "Existed_Raid", 00:11:34.020 "uuid": "6fee0122-d336-43a6-8b78-b0c4e09d2ad3", 00:11:34.020 "strip_size_kb": 64, 00:11:34.020 "state": "configuring", 00:11:34.020 "raid_level": "concat", 00:11:34.020 "superblock": true, 00:11:34.020 "num_base_bdevs": 4, 00:11:34.020 "num_base_bdevs_discovered": 1, 00:11:34.020 "num_base_bdevs_operational": 4, 00:11:34.020 "base_bdevs_list": [ 00:11:34.020 { 00:11:34.020 "name": "BaseBdev1", 00:11:34.020 "uuid": "75dfd880-4482-4dd7-82e0-f3d68671adce", 00:11:34.020 "is_configured": true, 00:11:34.020 "data_offset": 2048, 00:11:34.020 "data_size": 63488 00:11:34.020 }, 00:11:34.020 { 00:11:34.020 "name": "BaseBdev2", 00:11:34.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.020 "is_configured": false, 00:11:34.020 "data_offset": 0, 00:11:34.020 "data_size": 0 00:11:34.020 }, 00:11:34.020 { 00:11:34.020 "name": "BaseBdev3", 00:11:34.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.020 "is_configured": false, 00:11:34.020 "data_offset": 0, 00:11:34.020 "data_size": 0 00:11:34.020 }, 00:11:34.020 { 00:11:34.020 "name": "BaseBdev4", 00:11:34.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.020 "is_configured": false, 00:11:34.020 "data_offset": 0, 00:11:34.020 "data_size": 0 00:11:34.020 } 00:11:34.020 ] 00:11:34.020 }' 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.020 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.588 15:38:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.588 [2024-11-25 15:38:32.975209] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.588 [2024-11-25 15:38:32.975263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.588 [2024-11-25 15:38:32.987274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.588 [2024-11-25 15:38:32.989139] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.588 [2024-11-25 15:38:32.989182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.588 [2024-11-25 15:38:32.989192] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.588 [2024-11-25 15:38:32.989203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.588 [2024-11-25 15:38:32.989210] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:34.588 [2024-11-25 15:38:32.989219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.588 15:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.588 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.588 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.588 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:34.588 "name": "Existed_Raid", 00:11:34.588 "uuid": "4a8addd3-191d-4606-84c2-92d4abc8345f", 00:11:34.588 "strip_size_kb": 64, 00:11:34.588 "state": "configuring", 00:11:34.588 "raid_level": "concat", 00:11:34.588 "superblock": true, 00:11:34.588 "num_base_bdevs": 4, 00:11:34.588 "num_base_bdevs_discovered": 1, 00:11:34.588 "num_base_bdevs_operational": 4, 00:11:34.588 "base_bdevs_list": [ 00:11:34.588 { 00:11:34.588 "name": "BaseBdev1", 00:11:34.588 "uuid": "75dfd880-4482-4dd7-82e0-f3d68671adce", 00:11:34.588 "is_configured": true, 00:11:34.588 "data_offset": 2048, 00:11:34.588 "data_size": 63488 00:11:34.588 }, 00:11:34.588 { 00:11:34.588 "name": "BaseBdev2", 00:11:34.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.589 "is_configured": false, 00:11:34.589 "data_offset": 0, 00:11:34.589 "data_size": 0 00:11:34.589 }, 00:11:34.589 { 00:11:34.589 "name": "BaseBdev3", 00:11:34.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.589 "is_configured": false, 00:11:34.589 "data_offset": 0, 00:11:34.589 "data_size": 0 00:11:34.589 }, 00:11:34.589 { 00:11:34.589 "name": "BaseBdev4", 00:11:34.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.589 "is_configured": false, 00:11:34.589 "data_offset": 0, 00:11:34.589 "data_size": 0 00:11:34.589 } 00:11:34.589 ] 00:11:34.589 }' 00:11:34.589 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.589 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.848 [2024-11-25 15:38:33.459613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:34.848 BaseBdev2 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.848 [ 00:11:34.848 { 00:11:34.848 "name": "BaseBdev2", 00:11:34.848 "aliases": [ 00:11:34.848 "d4c23fef-7908-421d-8935-3504d7b8aadf" 00:11:34.848 ], 00:11:34.848 "product_name": "Malloc disk", 00:11:34.848 "block_size": 512, 00:11:34.848 "num_blocks": 65536, 00:11:34.848 "uuid": "d4c23fef-7908-421d-8935-3504d7b8aadf", 
00:11:34.848 "assigned_rate_limits": { 00:11:34.848 "rw_ios_per_sec": 0, 00:11:34.848 "rw_mbytes_per_sec": 0, 00:11:34.848 "r_mbytes_per_sec": 0, 00:11:34.848 "w_mbytes_per_sec": 0 00:11:34.848 }, 00:11:34.848 "claimed": true, 00:11:34.848 "claim_type": "exclusive_write", 00:11:34.848 "zoned": false, 00:11:34.848 "supported_io_types": { 00:11:34.848 "read": true, 00:11:34.848 "write": true, 00:11:34.848 "unmap": true, 00:11:34.848 "flush": true, 00:11:34.848 "reset": true, 00:11:34.848 "nvme_admin": false, 00:11:34.848 "nvme_io": false, 00:11:34.848 "nvme_io_md": false, 00:11:34.848 "write_zeroes": true, 00:11:34.848 "zcopy": true, 00:11:34.848 "get_zone_info": false, 00:11:34.848 "zone_management": false, 00:11:34.848 "zone_append": false, 00:11:34.848 "compare": false, 00:11:34.848 "compare_and_write": false, 00:11:34.848 "abort": true, 00:11:34.848 "seek_hole": false, 00:11:34.848 "seek_data": false, 00:11:34.848 "copy": true, 00:11:34.848 "nvme_iov_md": false 00:11:34.848 }, 00:11:34.848 "memory_domains": [ 00:11:34.848 { 00:11:34.848 "dma_device_id": "system", 00:11:34.848 "dma_device_type": 1 00:11:34.848 }, 00:11:34.848 { 00:11:34.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.848 "dma_device_type": 2 00:11:34.848 } 00:11:34.848 ], 00:11:34.848 "driver_specific": {} 00:11:34.848 } 00:11:34.848 ] 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.848 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.107 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.107 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.107 "name": "Existed_Raid", 00:11:35.107 "uuid": "4a8addd3-191d-4606-84c2-92d4abc8345f", 00:11:35.107 "strip_size_kb": 64, 00:11:35.107 "state": "configuring", 00:11:35.107 "raid_level": "concat", 00:11:35.107 "superblock": true, 00:11:35.107 "num_base_bdevs": 4, 00:11:35.107 "num_base_bdevs_discovered": 2, 00:11:35.107 
"num_base_bdevs_operational": 4, 00:11:35.107 "base_bdevs_list": [ 00:11:35.107 { 00:11:35.107 "name": "BaseBdev1", 00:11:35.107 "uuid": "75dfd880-4482-4dd7-82e0-f3d68671adce", 00:11:35.107 "is_configured": true, 00:11:35.107 "data_offset": 2048, 00:11:35.107 "data_size": 63488 00:11:35.107 }, 00:11:35.107 { 00:11:35.107 "name": "BaseBdev2", 00:11:35.107 "uuid": "d4c23fef-7908-421d-8935-3504d7b8aadf", 00:11:35.107 "is_configured": true, 00:11:35.107 "data_offset": 2048, 00:11:35.107 "data_size": 63488 00:11:35.107 }, 00:11:35.107 { 00:11:35.107 "name": "BaseBdev3", 00:11:35.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.107 "is_configured": false, 00:11:35.107 "data_offset": 0, 00:11:35.107 "data_size": 0 00:11:35.107 }, 00:11:35.107 { 00:11:35.107 "name": "BaseBdev4", 00:11:35.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.107 "is_configured": false, 00:11:35.107 "data_offset": 0, 00:11:35.107 "data_size": 0 00:11:35.107 } 00:11:35.107 ] 00:11:35.107 }' 00:11:35.107 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.107 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.366 15:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:35.366 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.366 15:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.366 [2024-11-25 15:38:34.017739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.366 BaseBdev3 00:11:35.366 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.366 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:35.366 15:38:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:35.366 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.366 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.366 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.366 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.366 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.366 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.366 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.366 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.366 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.366 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.366 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.366 [ 00:11:35.366 { 00:11:35.366 "name": "BaseBdev3", 00:11:35.366 "aliases": [ 00:11:35.366 "83440d11-53c7-4abc-810f-f5f1f18a9b2b" 00:11:35.366 ], 00:11:35.366 "product_name": "Malloc disk", 00:11:35.366 "block_size": 512, 00:11:35.366 "num_blocks": 65536, 00:11:35.366 "uuid": "83440d11-53c7-4abc-810f-f5f1f18a9b2b", 00:11:35.366 "assigned_rate_limits": { 00:11:35.366 "rw_ios_per_sec": 0, 00:11:35.366 "rw_mbytes_per_sec": 0, 00:11:35.366 "r_mbytes_per_sec": 0, 00:11:35.625 "w_mbytes_per_sec": 0 00:11:35.625 }, 00:11:35.625 "claimed": true, 00:11:35.625 "claim_type": "exclusive_write", 00:11:35.625 "zoned": false, 00:11:35.625 "supported_io_types": { 
00:11:35.625 "read": true, 00:11:35.625 "write": true, 00:11:35.625 "unmap": true, 00:11:35.625 "flush": true, 00:11:35.625 "reset": true, 00:11:35.625 "nvme_admin": false, 00:11:35.625 "nvme_io": false, 00:11:35.625 "nvme_io_md": false, 00:11:35.625 "write_zeroes": true, 00:11:35.625 "zcopy": true, 00:11:35.625 "get_zone_info": false, 00:11:35.625 "zone_management": false, 00:11:35.625 "zone_append": false, 00:11:35.625 "compare": false, 00:11:35.625 "compare_and_write": false, 00:11:35.625 "abort": true, 00:11:35.625 "seek_hole": false, 00:11:35.625 "seek_data": false, 00:11:35.625 "copy": true, 00:11:35.625 "nvme_iov_md": false 00:11:35.625 }, 00:11:35.625 "memory_domains": [ 00:11:35.625 { 00:11:35.625 "dma_device_id": "system", 00:11:35.625 "dma_device_type": 1 00:11:35.625 }, 00:11:35.625 { 00:11:35.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.625 "dma_device_type": 2 00:11:35.625 } 00:11:35.625 ], 00:11:35.625 "driver_specific": {} 00:11:35.625 } 00:11:35.625 ] 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.625 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.625 "name": "Existed_Raid", 00:11:35.625 "uuid": "4a8addd3-191d-4606-84c2-92d4abc8345f", 00:11:35.625 "strip_size_kb": 64, 00:11:35.625 "state": "configuring", 00:11:35.625 "raid_level": "concat", 00:11:35.625 "superblock": true, 00:11:35.625 "num_base_bdevs": 4, 00:11:35.625 "num_base_bdevs_discovered": 3, 00:11:35.625 "num_base_bdevs_operational": 4, 00:11:35.625 "base_bdevs_list": [ 00:11:35.625 { 00:11:35.625 "name": "BaseBdev1", 00:11:35.625 "uuid": "75dfd880-4482-4dd7-82e0-f3d68671adce", 00:11:35.625 "is_configured": true, 00:11:35.625 "data_offset": 2048, 00:11:35.625 "data_size": 63488 00:11:35.625 }, 00:11:35.626 { 00:11:35.626 "name": "BaseBdev2", 00:11:35.626 
"uuid": "d4c23fef-7908-421d-8935-3504d7b8aadf", 00:11:35.626 "is_configured": true, 00:11:35.626 "data_offset": 2048, 00:11:35.626 "data_size": 63488 00:11:35.626 }, 00:11:35.626 { 00:11:35.626 "name": "BaseBdev3", 00:11:35.626 "uuid": "83440d11-53c7-4abc-810f-f5f1f18a9b2b", 00:11:35.626 "is_configured": true, 00:11:35.626 "data_offset": 2048, 00:11:35.626 "data_size": 63488 00:11:35.626 }, 00:11:35.626 { 00:11:35.626 "name": "BaseBdev4", 00:11:35.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.626 "is_configured": false, 00:11:35.626 "data_offset": 0, 00:11:35.626 "data_size": 0 00:11:35.626 } 00:11:35.626 ] 00:11:35.626 }' 00:11:35.626 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.626 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.885 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:35.885 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.885 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.885 BaseBdev4 00:11:35.885 [2024-11-25 15:38:34.480571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:35.885 [2024-11-25 15:38:34.480851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:35.885 [2024-11-25 15:38:34.480868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:35.885 [2024-11-25 15:38:34.481164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:35.885 [2024-11-25 15:38:34.481323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:35.885 [2024-11-25 15:38:34.481336] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:35.885 [2024-11-25 15:38:34.481468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.885 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.885 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:35.885 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:35.885 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.885 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.885 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.885 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.885 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.885 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.885 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.885 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.886 [ 00:11:35.886 { 00:11:35.886 "name": "BaseBdev4", 00:11:35.886 "aliases": [ 00:11:35.886 "e7e7a94a-6c7f-4e23-91ae-a0b684b13c7f" 00:11:35.886 ], 00:11:35.886 "product_name": "Malloc disk", 00:11:35.886 "block_size": 512, 00:11:35.886 
"num_blocks": 65536, 00:11:35.886 "uuid": "e7e7a94a-6c7f-4e23-91ae-a0b684b13c7f", 00:11:35.886 "assigned_rate_limits": { 00:11:35.886 "rw_ios_per_sec": 0, 00:11:35.886 "rw_mbytes_per_sec": 0, 00:11:35.886 "r_mbytes_per_sec": 0, 00:11:35.886 "w_mbytes_per_sec": 0 00:11:35.886 }, 00:11:35.886 "claimed": true, 00:11:35.886 "claim_type": "exclusive_write", 00:11:35.886 "zoned": false, 00:11:35.886 "supported_io_types": { 00:11:35.886 "read": true, 00:11:35.886 "write": true, 00:11:35.886 "unmap": true, 00:11:35.886 "flush": true, 00:11:35.886 "reset": true, 00:11:35.886 "nvme_admin": false, 00:11:35.886 "nvme_io": false, 00:11:35.886 "nvme_io_md": false, 00:11:35.886 "write_zeroes": true, 00:11:35.886 "zcopy": true, 00:11:35.886 "get_zone_info": false, 00:11:35.886 "zone_management": false, 00:11:35.886 "zone_append": false, 00:11:35.886 "compare": false, 00:11:35.886 "compare_and_write": false, 00:11:35.886 "abort": true, 00:11:35.886 "seek_hole": false, 00:11:35.886 "seek_data": false, 00:11:35.886 "copy": true, 00:11:35.886 "nvme_iov_md": false 00:11:35.886 }, 00:11:35.886 "memory_domains": [ 00:11:35.886 { 00:11:35.886 "dma_device_id": "system", 00:11:35.886 "dma_device_type": 1 00:11:35.886 }, 00:11:35.886 { 00:11:35.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.886 "dma_device_type": 2 00:11:35.886 } 00:11:35.886 ], 00:11:35.886 "driver_specific": {} 00:11:35.886 } 00:11:35.886 ] 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
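Aside (not part of the captured log): the `verify_raid_bdev_state` helper traced above fetches the array's JSON with `rpc_cmd bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares a handful of fields against the expected values. As an illustration only, the comparison it performs can be sketched in Python against the `"state": "configuring"` record printed earlier in this log; the function name mirrors the shell helper but this is a hedged sketch, not SPDK source code.

```python
import json

# Excerpt of the bdev_raid_get_bdevs output captured in the log above,
# trimmed to the fields the shell helper actually compares.
RAID_INFO = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 4
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size_kb, num_operational):
    """Sketch of the shell helper's checks: state, RAID level, strip size,
    and operational base-bdev count must all match the expected values."""
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size_kb
            and info["num_base_bdevs_operational"] == num_operational)
```

With three of four base bdevs discovered, the record above passes the `configuring` check but would fail an `online` check, which is why the test only re-verifies with `online` after `BaseBdev4` is created and claimed.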
00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.886 "name": "Existed_Raid", 00:11:35.886 "uuid": "4a8addd3-191d-4606-84c2-92d4abc8345f", 00:11:35.886 "strip_size_kb": 64, 00:11:35.886 "state": "online", 00:11:35.886 "raid_level": "concat", 00:11:35.886 "superblock": true, 00:11:35.886 "num_base_bdevs": 4, 
00:11:35.886 "num_base_bdevs_discovered": 4, 00:11:35.886 "num_base_bdevs_operational": 4, 00:11:35.886 "base_bdevs_list": [ 00:11:35.886 { 00:11:35.886 "name": "BaseBdev1", 00:11:35.886 "uuid": "75dfd880-4482-4dd7-82e0-f3d68671adce", 00:11:35.886 "is_configured": true, 00:11:35.886 "data_offset": 2048, 00:11:35.886 "data_size": 63488 00:11:35.886 }, 00:11:35.886 { 00:11:35.886 "name": "BaseBdev2", 00:11:35.886 "uuid": "d4c23fef-7908-421d-8935-3504d7b8aadf", 00:11:35.886 "is_configured": true, 00:11:35.886 "data_offset": 2048, 00:11:35.886 "data_size": 63488 00:11:35.886 }, 00:11:35.886 { 00:11:35.886 "name": "BaseBdev3", 00:11:35.886 "uuid": "83440d11-53c7-4abc-810f-f5f1f18a9b2b", 00:11:35.886 "is_configured": true, 00:11:35.886 "data_offset": 2048, 00:11:35.886 "data_size": 63488 00:11:35.886 }, 00:11:35.886 { 00:11:35.886 "name": "BaseBdev4", 00:11:35.886 "uuid": "e7e7a94a-6c7f-4e23-91ae-a0b684b13c7f", 00:11:35.886 "is_configured": true, 00:11:35.886 "data_offset": 2048, 00:11:35.886 "data_size": 63488 00:11:35.886 } 00:11:35.886 ] 00:11:35.886 }' 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.886 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.452 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.452 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.452 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.452 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.452 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.452 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.452 
15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.452 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.452 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.452 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.452 [2024-11-25 15:38:34.960167] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.452 15:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.452 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.452 "name": "Existed_Raid", 00:11:36.452 "aliases": [ 00:11:36.452 "4a8addd3-191d-4606-84c2-92d4abc8345f" 00:11:36.452 ], 00:11:36.452 "product_name": "Raid Volume", 00:11:36.452 "block_size": 512, 00:11:36.452 "num_blocks": 253952, 00:11:36.452 "uuid": "4a8addd3-191d-4606-84c2-92d4abc8345f", 00:11:36.452 "assigned_rate_limits": { 00:11:36.452 "rw_ios_per_sec": 0, 00:11:36.452 "rw_mbytes_per_sec": 0, 00:11:36.452 "r_mbytes_per_sec": 0, 00:11:36.452 "w_mbytes_per_sec": 0 00:11:36.452 }, 00:11:36.452 "claimed": false, 00:11:36.452 "zoned": false, 00:11:36.452 "supported_io_types": { 00:11:36.452 "read": true, 00:11:36.452 "write": true, 00:11:36.452 "unmap": true, 00:11:36.452 "flush": true, 00:11:36.452 "reset": true, 00:11:36.452 "nvme_admin": false, 00:11:36.452 "nvme_io": false, 00:11:36.452 "nvme_io_md": false, 00:11:36.452 "write_zeroes": true, 00:11:36.452 "zcopy": false, 00:11:36.452 "get_zone_info": false, 00:11:36.452 "zone_management": false, 00:11:36.452 "zone_append": false, 00:11:36.452 "compare": false, 00:11:36.452 "compare_and_write": false, 00:11:36.452 "abort": false, 00:11:36.452 "seek_hole": false, 00:11:36.452 "seek_data": false, 00:11:36.452 "copy": false, 00:11:36.452 
"nvme_iov_md": false 00:11:36.452 }, 00:11:36.452 "memory_domains": [ 00:11:36.452 { 00:11:36.452 "dma_device_id": "system", 00:11:36.452 "dma_device_type": 1 00:11:36.452 }, 00:11:36.452 { 00:11:36.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.453 "dma_device_type": 2 00:11:36.453 }, 00:11:36.453 { 00:11:36.453 "dma_device_id": "system", 00:11:36.453 "dma_device_type": 1 00:11:36.453 }, 00:11:36.453 { 00:11:36.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.453 "dma_device_type": 2 00:11:36.453 }, 00:11:36.453 { 00:11:36.453 "dma_device_id": "system", 00:11:36.453 "dma_device_type": 1 00:11:36.453 }, 00:11:36.453 { 00:11:36.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.453 "dma_device_type": 2 00:11:36.453 }, 00:11:36.453 { 00:11:36.453 "dma_device_id": "system", 00:11:36.453 "dma_device_type": 1 00:11:36.453 }, 00:11:36.453 { 00:11:36.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.453 "dma_device_type": 2 00:11:36.453 } 00:11:36.453 ], 00:11:36.453 "driver_specific": { 00:11:36.453 "raid": { 00:11:36.453 "uuid": "4a8addd3-191d-4606-84c2-92d4abc8345f", 00:11:36.453 "strip_size_kb": 64, 00:11:36.453 "state": "online", 00:11:36.453 "raid_level": "concat", 00:11:36.453 "superblock": true, 00:11:36.453 "num_base_bdevs": 4, 00:11:36.453 "num_base_bdevs_discovered": 4, 00:11:36.453 "num_base_bdevs_operational": 4, 00:11:36.453 "base_bdevs_list": [ 00:11:36.453 { 00:11:36.453 "name": "BaseBdev1", 00:11:36.453 "uuid": "75dfd880-4482-4dd7-82e0-f3d68671adce", 00:11:36.453 "is_configured": true, 00:11:36.453 "data_offset": 2048, 00:11:36.453 "data_size": 63488 00:11:36.453 }, 00:11:36.453 { 00:11:36.453 "name": "BaseBdev2", 00:11:36.453 "uuid": "d4c23fef-7908-421d-8935-3504d7b8aadf", 00:11:36.453 "is_configured": true, 00:11:36.453 "data_offset": 2048, 00:11:36.453 "data_size": 63488 00:11:36.453 }, 00:11:36.453 { 00:11:36.453 "name": "BaseBdev3", 00:11:36.453 "uuid": "83440d11-53c7-4abc-810f-f5f1f18a9b2b", 00:11:36.453 "is_configured": true, 
00:11:36.453 "data_offset": 2048, 00:11:36.453 "data_size": 63488 00:11:36.453 }, 00:11:36.453 { 00:11:36.453 "name": "BaseBdev4", 00:11:36.453 "uuid": "e7e7a94a-6c7f-4e23-91ae-a0b684b13c7f", 00:11:36.453 "is_configured": true, 00:11:36.453 "data_offset": 2048, 00:11:36.453 "data_size": 63488 00:11:36.453 } 00:11:36.453 ] 00:11:36.453 } 00:11:36.453 } 00:11:36.453 }' 00:11:36.453 15:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.453 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:36.453 BaseBdev2 00:11:36.453 BaseBdev3 00:11:36.453 BaseBdev4' 00:11:36.453 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.453 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.453 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.453 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:36.453 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.453 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.453 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.453 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.453 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.453 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.453 15:38:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.453 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.453 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.453 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.453 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.711 [2024-11-25 15:38:35.283280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.711 [2024-11-25 15:38:35.283311] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.711 [2024-11-25 15:38:35.283359] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.711 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.969 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
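Aside (not part of the captured log): the trace above shows `has_redundancy concat` hitting the fall-through branch of the `case` statement and returning 1, so after `bdev_malloc_delete BaseBdev1` the test expects the array to transition to `offline` rather than stay `online`. A minimal sketch of that decision follows; the set of redundant levels is an assumption based on SPDK's documented RAID levels, not something stated in this log.

```python
# Assumed set of levels that can survive losing a base bdev; mirrors the
# intent of has_redundancy's case statement, but the exact membership is
# an assumption, not taken from this log.
REDUNDANT_LEVELS = {"raid1", "raid5f"}

def expected_state_after_base_bdev_loss(raid_level: str) -> str:
    """Levels without redundancy (e.g. concat, raid0) cannot tolerate the
    loss of a base bdev, so the raid bdev is expected to go offline."""
    return "online" if raid_level in REDUNDANT_LEVELS else "offline"
```

This matches the trace: `expected_state=offline`, followed by `verify_raid_bdev_state Existed_Raid offline concat 64 3` with three operational base bdevs remaining.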
00:11:36.969 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.969 "name": "Existed_Raid", 00:11:36.969 "uuid": "4a8addd3-191d-4606-84c2-92d4abc8345f", 00:11:36.969 "strip_size_kb": 64, 00:11:36.969 "state": "offline", 00:11:36.969 "raid_level": "concat", 00:11:36.969 "superblock": true, 00:11:36.969 "num_base_bdevs": 4, 00:11:36.969 "num_base_bdevs_discovered": 3, 00:11:36.969 "num_base_bdevs_operational": 3, 00:11:36.969 "base_bdevs_list": [ 00:11:36.969 { 00:11:36.969 "name": null, 00:11:36.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.969 "is_configured": false, 00:11:36.969 "data_offset": 0, 00:11:36.969 "data_size": 63488 00:11:36.969 }, 00:11:36.969 { 00:11:36.969 "name": "BaseBdev2", 00:11:36.969 "uuid": "d4c23fef-7908-421d-8935-3504d7b8aadf", 00:11:36.969 "is_configured": true, 00:11:36.969 "data_offset": 2048, 00:11:36.969 "data_size": 63488 00:11:36.969 }, 00:11:36.969 { 00:11:36.969 "name": "BaseBdev3", 00:11:36.969 "uuid": "83440d11-53c7-4abc-810f-f5f1f18a9b2b", 00:11:36.969 "is_configured": true, 00:11:36.969 "data_offset": 2048, 00:11:36.969 "data_size": 63488 00:11:36.969 }, 00:11:36.969 { 00:11:36.969 "name": "BaseBdev4", 00:11:36.969 "uuid": "e7e7a94a-6c7f-4e23-91ae-a0b684b13c7f", 00:11:36.969 "is_configured": true, 00:11:36.969 "data_offset": 2048, 00:11:36.969 "data_size": 63488 00:11:36.969 } 00:11:36.969 ] 00:11:36.969 }' 00:11:36.969 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.969 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.228 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:37.228 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.228 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.228 15:38:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.228 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.228 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.228 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.228 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.228 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.228 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:37.228 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.228 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.228 [2024-11-25 15:38:35.869301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.486 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.486 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.486 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.486 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.486 15:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.486 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.486 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.486 15:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:37.486 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.486 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.486 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:37.486 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.486 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.486 [2024-11-25 15:38:36.022170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.486 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.486 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.486 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.486 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.487 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.487 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.487 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.487 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:37.746 15:38:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.746 [2024-11-25 15:38:36.175772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:37.746 [2024-11-25 15:38:36.175878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.746 BaseBdev2 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.746 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.746 [ 00:11:37.746 { 00:11:37.746 "name": "BaseBdev2", 00:11:37.746 "aliases": [ 00:11:37.746 
"37496fc5-f9e0-4223-ab3d-b93598c26cfa" 00:11:37.746 ], 00:11:37.746 "product_name": "Malloc disk", 00:11:37.746 "block_size": 512, 00:11:37.746 "num_blocks": 65536, 00:11:37.746 "uuid": "37496fc5-f9e0-4223-ab3d-b93598c26cfa", 00:11:37.746 "assigned_rate_limits": { 00:11:37.746 "rw_ios_per_sec": 0, 00:11:37.746 "rw_mbytes_per_sec": 0, 00:11:37.746 "r_mbytes_per_sec": 0, 00:11:37.746 "w_mbytes_per_sec": 0 00:11:37.746 }, 00:11:37.746 "claimed": false, 00:11:37.746 "zoned": false, 00:11:37.746 "supported_io_types": { 00:11:37.746 "read": true, 00:11:37.746 "write": true, 00:11:37.746 "unmap": true, 00:11:37.746 "flush": true, 00:11:37.746 "reset": true, 00:11:37.746 "nvme_admin": false, 00:11:37.746 "nvme_io": false, 00:11:37.746 "nvme_io_md": false, 00:11:37.746 "write_zeroes": true, 00:11:37.746 "zcopy": true, 00:11:37.746 "get_zone_info": false, 00:11:37.746 "zone_management": false, 00:11:37.746 "zone_append": false, 00:11:37.746 "compare": false, 00:11:37.746 "compare_and_write": false, 00:11:37.746 "abort": true, 00:11:37.746 "seek_hole": false, 00:11:37.746 "seek_data": false, 00:11:37.746 "copy": true, 00:11:37.746 "nvme_iov_md": false 00:11:37.746 }, 00:11:37.746 "memory_domains": [ 00:11:37.746 { 00:11:37.746 "dma_device_id": "system", 00:11:37.746 "dma_device_type": 1 00:11:37.746 }, 00:11:37.746 { 00:11:37.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.746 "dma_device_type": 2 00:11:37.746 } 00:11:37.746 ], 00:11:37.746 "driver_specific": {} 00:11:37.746 } 00:11:37.746 ] 00:11:37.747 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.747 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.747 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.747 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.747 15:38:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.747 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.747 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.006 BaseBdev3 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.006 [ 00:11:38.006 { 
00:11:38.006 "name": "BaseBdev3", 00:11:38.006 "aliases": [ 00:11:38.006 "0c69b411-4d73-40d9-8eb7-ed738f8d4a8b" 00:11:38.006 ], 00:11:38.006 "product_name": "Malloc disk", 00:11:38.006 "block_size": 512, 00:11:38.006 "num_blocks": 65536, 00:11:38.006 "uuid": "0c69b411-4d73-40d9-8eb7-ed738f8d4a8b", 00:11:38.006 "assigned_rate_limits": { 00:11:38.006 "rw_ios_per_sec": 0, 00:11:38.006 "rw_mbytes_per_sec": 0, 00:11:38.006 "r_mbytes_per_sec": 0, 00:11:38.006 "w_mbytes_per_sec": 0 00:11:38.006 }, 00:11:38.006 "claimed": false, 00:11:38.006 "zoned": false, 00:11:38.006 "supported_io_types": { 00:11:38.006 "read": true, 00:11:38.006 "write": true, 00:11:38.006 "unmap": true, 00:11:38.006 "flush": true, 00:11:38.006 "reset": true, 00:11:38.006 "nvme_admin": false, 00:11:38.006 "nvme_io": false, 00:11:38.006 "nvme_io_md": false, 00:11:38.006 "write_zeroes": true, 00:11:38.006 "zcopy": true, 00:11:38.006 "get_zone_info": false, 00:11:38.006 "zone_management": false, 00:11:38.006 "zone_append": false, 00:11:38.006 "compare": false, 00:11:38.006 "compare_and_write": false, 00:11:38.006 "abort": true, 00:11:38.006 "seek_hole": false, 00:11:38.006 "seek_data": false, 00:11:38.006 "copy": true, 00:11:38.006 "nvme_iov_md": false 00:11:38.006 }, 00:11:38.006 "memory_domains": [ 00:11:38.006 { 00:11:38.006 "dma_device_id": "system", 00:11:38.006 "dma_device_type": 1 00:11:38.006 }, 00:11:38.006 { 00:11:38.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.006 "dma_device_type": 2 00:11:38.006 } 00:11:38.006 ], 00:11:38.006 "driver_specific": {} 00:11:38.006 } 00:11:38.006 ] 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.006 BaseBdev4 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:38.006 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:38.007 [ 00:11:38.007 { 00:11:38.007 "name": "BaseBdev4", 00:11:38.007 "aliases": [ 00:11:38.007 "4fbd6c34-2f87-4862-bef4-89ef0759ec70" 00:11:38.007 ], 00:11:38.007 "product_name": "Malloc disk", 00:11:38.007 "block_size": 512, 00:11:38.007 "num_blocks": 65536, 00:11:38.007 "uuid": "4fbd6c34-2f87-4862-bef4-89ef0759ec70", 00:11:38.007 "assigned_rate_limits": { 00:11:38.007 "rw_ios_per_sec": 0, 00:11:38.007 "rw_mbytes_per_sec": 0, 00:11:38.007 "r_mbytes_per_sec": 0, 00:11:38.007 "w_mbytes_per_sec": 0 00:11:38.007 }, 00:11:38.007 "claimed": false, 00:11:38.007 "zoned": false, 00:11:38.007 "supported_io_types": { 00:11:38.007 "read": true, 00:11:38.007 "write": true, 00:11:38.007 "unmap": true, 00:11:38.007 "flush": true, 00:11:38.007 "reset": true, 00:11:38.007 "nvme_admin": false, 00:11:38.007 "nvme_io": false, 00:11:38.007 "nvme_io_md": false, 00:11:38.007 "write_zeroes": true, 00:11:38.007 "zcopy": true, 00:11:38.007 "get_zone_info": false, 00:11:38.007 "zone_management": false, 00:11:38.007 "zone_append": false, 00:11:38.007 "compare": false, 00:11:38.007 "compare_and_write": false, 00:11:38.007 "abort": true, 00:11:38.007 "seek_hole": false, 00:11:38.007 "seek_data": false, 00:11:38.007 "copy": true, 00:11:38.007 "nvme_iov_md": false 00:11:38.007 }, 00:11:38.007 "memory_domains": [ 00:11:38.007 { 00:11:38.007 "dma_device_id": "system", 00:11:38.007 "dma_device_type": 1 00:11:38.007 }, 00:11:38.007 { 00:11:38.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.007 "dma_device_type": 2 00:11:38.007 } 00:11:38.007 ], 00:11:38.007 "driver_specific": {} 00:11:38.007 } 00:11:38.007 ] 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.007 15:38:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.007 [2024-11-25 15:38:36.564455] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:38.007 [2024-11-25 15:38:36.564538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:38.007 [2024-11-25 15:38:36.564593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.007 [2024-11-25 15:38:36.566335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.007 [2024-11-25 15:38:36.566440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.007 "name": "Existed_Raid", 00:11:38.007 "uuid": "538cab67-6c0d-45f8-b694-89e5909e49fe", 00:11:38.007 "strip_size_kb": 64, 00:11:38.007 "state": "configuring", 00:11:38.007 "raid_level": "concat", 00:11:38.007 "superblock": true, 00:11:38.007 "num_base_bdevs": 4, 00:11:38.007 "num_base_bdevs_discovered": 3, 00:11:38.007 "num_base_bdevs_operational": 4, 00:11:38.007 "base_bdevs_list": [ 00:11:38.007 { 00:11:38.007 "name": "BaseBdev1", 00:11:38.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.007 "is_configured": false, 00:11:38.007 "data_offset": 0, 00:11:38.007 "data_size": 0 00:11:38.007 }, 00:11:38.007 { 00:11:38.007 "name": "BaseBdev2", 00:11:38.007 "uuid": "37496fc5-f9e0-4223-ab3d-b93598c26cfa", 00:11:38.007 "is_configured": true, 00:11:38.007 "data_offset": 2048, 00:11:38.007 "data_size": 63488 
00:11:38.007 }, 00:11:38.007 { 00:11:38.007 "name": "BaseBdev3", 00:11:38.007 "uuid": "0c69b411-4d73-40d9-8eb7-ed738f8d4a8b", 00:11:38.007 "is_configured": true, 00:11:38.007 "data_offset": 2048, 00:11:38.007 "data_size": 63488 00:11:38.007 }, 00:11:38.007 { 00:11:38.007 "name": "BaseBdev4", 00:11:38.007 "uuid": "4fbd6c34-2f87-4862-bef4-89ef0759ec70", 00:11:38.007 "is_configured": true, 00:11:38.007 "data_offset": 2048, 00:11:38.007 "data_size": 63488 00:11:38.007 } 00:11:38.007 ] 00:11:38.007 }' 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.007 15:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.575 [2024-11-25 15:38:37.027673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.575 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.575 "name": "Existed_Raid", 00:11:38.575 "uuid": "538cab67-6c0d-45f8-b694-89e5909e49fe", 00:11:38.576 "strip_size_kb": 64, 00:11:38.576 "state": "configuring", 00:11:38.576 "raid_level": "concat", 00:11:38.576 "superblock": true, 00:11:38.576 "num_base_bdevs": 4, 00:11:38.576 "num_base_bdevs_discovered": 2, 00:11:38.576 "num_base_bdevs_operational": 4, 00:11:38.576 "base_bdevs_list": [ 00:11:38.576 { 00:11:38.576 "name": "BaseBdev1", 00:11:38.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.576 "is_configured": false, 00:11:38.576 "data_offset": 0, 00:11:38.576 "data_size": 0 00:11:38.576 }, 00:11:38.576 { 00:11:38.576 "name": null, 00:11:38.576 "uuid": "37496fc5-f9e0-4223-ab3d-b93598c26cfa", 00:11:38.576 "is_configured": false, 00:11:38.576 "data_offset": 0, 00:11:38.576 "data_size": 63488 
00:11:38.576 }, 00:11:38.576 { 00:11:38.576 "name": "BaseBdev3", 00:11:38.576 "uuid": "0c69b411-4d73-40d9-8eb7-ed738f8d4a8b", 00:11:38.576 "is_configured": true, 00:11:38.576 "data_offset": 2048, 00:11:38.576 "data_size": 63488 00:11:38.576 }, 00:11:38.576 { 00:11:38.576 "name": "BaseBdev4", 00:11:38.576 "uuid": "4fbd6c34-2f87-4862-bef4-89ef0759ec70", 00:11:38.576 "is_configured": true, 00:11:38.576 "data_offset": 2048, 00:11:38.576 "data_size": 63488 00:11:38.576 } 00:11:38.576 ] 00:11:38.576 }' 00:11:38.576 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.576 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.834 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.834 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.834 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.835 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:38.835 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.835 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:38.835 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:38.835 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.835 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.094 [2024-11-25 15:38:37.551512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.094 BaseBdev1 00:11:39.094 15:38:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.094 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:39.094 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:39.094 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:39.094 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:39.094 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:39.094 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:39.094 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:39.094 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.094 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.094 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.094 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:39.094 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.094 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.094 [ 00:11:39.094 { 00:11:39.094 "name": "BaseBdev1", 00:11:39.094 "aliases": [ 00:11:39.094 "3cb55370-1768-48fc-adb4-7f671117a2ad" 00:11:39.094 ], 00:11:39.094 "product_name": "Malloc disk", 00:11:39.094 "block_size": 512, 00:11:39.094 "num_blocks": 65536, 00:11:39.094 "uuid": "3cb55370-1768-48fc-adb4-7f671117a2ad", 00:11:39.094 "assigned_rate_limits": { 00:11:39.094 "rw_ios_per_sec": 0, 00:11:39.094 "rw_mbytes_per_sec": 0, 
00:11:39.094 "r_mbytes_per_sec": 0, 00:11:39.094 "w_mbytes_per_sec": 0 00:11:39.094 }, 00:11:39.094 "claimed": true, 00:11:39.094 "claim_type": "exclusive_write", 00:11:39.094 "zoned": false, 00:11:39.094 "supported_io_types": { 00:11:39.094 "read": true, 00:11:39.094 "write": true, 00:11:39.094 "unmap": true, 00:11:39.094 "flush": true, 00:11:39.094 "reset": true, 00:11:39.094 "nvme_admin": false, 00:11:39.094 "nvme_io": false, 00:11:39.094 "nvme_io_md": false, 00:11:39.094 "write_zeroes": true, 00:11:39.094 "zcopy": true, 00:11:39.094 "get_zone_info": false, 00:11:39.094 "zone_management": false, 00:11:39.094 "zone_append": false, 00:11:39.094 "compare": false, 00:11:39.094 "compare_and_write": false, 00:11:39.094 "abort": true, 00:11:39.094 "seek_hole": false, 00:11:39.094 "seek_data": false, 00:11:39.094 "copy": true, 00:11:39.094 "nvme_iov_md": false 00:11:39.094 }, 00:11:39.094 "memory_domains": [ 00:11:39.094 { 00:11:39.094 "dma_device_id": "system", 00:11:39.094 "dma_device_type": 1 00:11:39.094 }, 00:11:39.094 { 00:11:39.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.094 "dma_device_type": 2 00:11:39.094 } 00:11:39.094 ], 00:11:39.094 "driver_specific": {} 00:11:39.094 } 00:11:39.094 ] 00:11:39.094 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.095 15:38:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.095 "name": "Existed_Raid", 00:11:39.095 "uuid": "538cab67-6c0d-45f8-b694-89e5909e49fe", 00:11:39.095 "strip_size_kb": 64, 00:11:39.095 "state": "configuring", 00:11:39.095 "raid_level": "concat", 00:11:39.095 "superblock": true, 00:11:39.095 "num_base_bdevs": 4, 00:11:39.095 "num_base_bdevs_discovered": 3, 00:11:39.095 "num_base_bdevs_operational": 4, 00:11:39.095 "base_bdevs_list": [ 00:11:39.095 { 00:11:39.095 "name": "BaseBdev1", 00:11:39.095 "uuid": "3cb55370-1768-48fc-adb4-7f671117a2ad", 00:11:39.095 "is_configured": true, 00:11:39.095 "data_offset": 2048, 00:11:39.095 "data_size": 63488 00:11:39.095 }, 00:11:39.095 { 
00:11:39.095 "name": null, 00:11:39.095 "uuid": "37496fc5-f9e0-4223-ab3d-b93598c26cfa", 00:11:39.095 "is_configured": false, 00:11:39.095 "data_offset": 0, 00:11:39.095 "data_size": 63488 00:11:39.095 }, 00:11:39.095 { 00:11:39.095 "name": "BaseBdev3", 00:11:39.095 "uuid": "0c69b411-4d73-40d9-8eb7-ed738f8d4a8b", 00:11:39.095 "is_configured": true, 00:11:39.095 "data_offset": 2048, 00:11:39.095 "data_size": 63488 00:11:39.095 }, 00:11:39.095 { 00:11:39.095 "name": "BaseBdev4", 00:11:39.095 "uuid": "4fbd6c34-2f87-4862-bef4-89ef0759ec70", 00:11:39.095 "is_configured": true, 00:11:39.095 "data_offset": 2048, 00:11:39.095 "data_size": 63488 00:11:39.095 } 00:11:39.095 ] 00:11:39.095 }' 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.095 15:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.663 [2024-11-25 15:38:38.074712] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.663 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.663 15:38:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.663 "name": "Existed_Raid", 00:11:39.663 "uuid": "538cab67-6c0d-45f8-b694-89e5909e49fe", 00:11:39.663 "strip_size_kb": 64, 00:11:39.663 "state": "configuring", 00:11:39.663 "raid_level": "concat", 00:11:39.663 "superblock": true, 00:11:39.663 "num_base_bdevs": 4, 00:11:39.663 "num_base_bdevs_discovered": 2, 00:11:39.663 "num_base_bdevs_operational": 4, 00:11:39.663 "base_bdevs_list": [ 00:11:39.663 { 00:11:39.663 "name": "BaseBdev1", 00:11:39.663 "uuid": "3cb55370-1768-48fc-adb4-7f671117a2ad", 00:11:39.663 "is_configured": true, 00:11:39.663 "data_offset": 2048, 00:11:39.663 "data_size": 63488 00:11:39.663 }, 00:11:39.663 { 00:11:39.663 "name": null, 00:11:39.663 "uuid": "37496fc5-f9e0-4223-ab3d-b93598c26cfa", 00:11:39.663 "is_configured": false, 00:11:39.663 "data_offset": 0, 00:11:39.664 "data_size": 63488 00:11:39.664 }, 00:11:39.664 { 00:11:39.664 "name": null, 00:11:39.664 "uuid": "0c69b411-4d73-40d9-8eb7-ed738f8d4a8b", 00:11:39.664 "is_configured": false, 00:11:39.664 "data_offset": 0, 00:11:39.664 "data_size": 63488 00:11:39.664 }, 00:11:39.664 { 00:11:39.664 "name": "BaseBdev4", 00:11:39.664 "uuid": "4fbd6c34-2f87-4862-bef4-89ef0759ec70", 00:11:39.664 "is_configured": true, 00:11:39.664 "data_offset": 2048, 00:11:39.664 "data_size": 63488 00:11:39.664 } 00:11:39.664 ] 00:11:39.664 }' 00:11:39.664 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.664 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.923 
15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.923 [2024-11-25 15:38:38.485987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.923 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.923 "name": "Existed_Raid", 00:11:39.923 "uuid": "538cab67-6c0d-45f8-b694-89e5909e49fe", 00:11:39.923 "strip_size_kb": 64, 00:11:39.923 "state": "configuring", 00:11:39.923 "raid_level": "concat", 00:11:39.923 "superblock": true, 00:11:39.923 "num_base_bdevs": 4, 00:11:39.923 "num_base_bdevs_discovered": 3, 00:11:39.923 "num_base_bdevs_operational": 4, 00:11:39.923 "base_bdevs_list": [ 00:11:39.923 { 00:11:39.923 "name": "BaseBdev1", 00:11:39.923 "uuid": "3cb55370-1768-48fc-adb4-7f671117a2ad", 00:11:39.923 "is_configured": true, 00:11:39.923 "data_offset": 2048, 00:11:39.923 "data_size": 63488 00:11:39.923 }, 00:11:39.923 { 00:11:39.923 "name": null, 00:11:39.923 "uuid": "37496fc5-f9e0-4223-ab3d-b93598c26cfa", 00:11:39.923 "is_configured": false, 00:11:39.923 "data_offset": 0, 00:11:39.923 "data_size": 63488 00:11:39.923 }, 00:11:39.923 { 00:11:39.923 "name": "BaseBdev3", 00:11:39.923 "uuid": "0c69b411-4d73-40d9-8eb7-ed738f8d4a8b", 00:11:39.923 "is_configured": true, 00:11:39.923 "data_offset": 2048, 00:11:39.923 "data_size": 63488 00:11:39.923 }, 00:11:39.923 { 00:11:39.923 "name": "BaseBdev4", 00:11:39.924 "uuid": 
"4fbd6c34-2f87-4862-bef4-89ef0759ec70", 00:11:39.924 "is_configured": true, 00:11:39.924 "data_offset": 2048, 00:11:39.924 "data_size": 63488 00:11:39.924 } 00:11:39.924 ] 00:11:39.924 }' 00:11:39.924 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.924 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.492 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.492 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.492 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.492 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.492 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.492 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:40.492 15:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:40.492 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.492 15:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.492 [2024-11-25 15:38:38.941241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.492 "name": "Existed_Raid", 00:11:40.492 "uuid": "538cab67-6c0d-45f8-b694-89e5909e49fe", 00:11:40.492 "strip_size_kb": 64, 00:11:40.492 "state": "configuring", 00:11:40.492 "raid_level": "concat", 00:11:40.492 "superblock": true, 00:11:40.492 "num_base_bdevs": 4, 00:11:40.492 "num_base_bdevs_discovered": 2, 00:11:40.492 "num_base_bdevs_operational": 4, 00:11:40.492 "base_bdevs_list": [ 00:11:40.492 { 00:11:40.492 "name": null, 00:11:40.492 
"uuid": "3cb55370-1768-48fc-adb4-7f671117a2ad", 00:11:40.492 "is_configured": false, 00:11:40.492 "data_offset": 0, 00:11:40.492 "data_size": 63488 00:11:40.492 }, 00:11:40.492 { 00:11:40.492 "name": null, 00:11:40.492 "uuid": "37496fc5-f9e0-4223-ab3d-b93598c26cfa", 00:11:40.492 "is_configured": false, 00:11:40.492 "data_offset": 0, 00:11:40.492 "data_size": 63488 00:11:40.492 }, 00:11:40.492 { 00:11:40.492 "name": "BaseBdev3", 00:11:40.492 "uuid": "0c69b411-4d73-40d9-8eb7-ed738f8d4a8b", 00:11:40.492 "is_configured": true, 00:11:40.492 "data_offset": 2048, 00:11:40.492 "data_size": 63488 00:11:40.492 }, 00:11:40.492 { 00:11:40.492 "name": "BaseBdev4", 00:11:40.492 "uuid": "4fbd6c34-2f87-4862-bef4-89ef0759ec70", 00:11:40.492 "is_configured": true, 00:11:40.492 "data_offset": 2048, 00:11:40.492 "data_size": 63488 00:11:40.492 } 00:11:40.492 ] 00:11:40.492 }' 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.492 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.060 [2024-11-25 15:38:39.484601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.060 15:38:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.060 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.060 "name": "Existed_Raid", 00:11:41.060 "uuid": "538cab67-6c0d-45f8-b694-89e5909e49fe", 00:11:41.060 "strip_size_kb": 64, 00:11:41.060 "state": "configuring", 00:11:41.060 "raid_level": "concat", 00:11:41.060 "superblock": true, 00:11:41.060 "num_base_bdevs": 4, 00:11:41.060 "num_base_bdevs_discovered": 3, 00:11:41.060 "num_base_bdevs_operational": 4, 00:11:41.060 "base_bdevs_list": [ 00:11:41.060 { 00:11:41.060 "name": null, 00:11:41.060 "uuid": "3cb55370-1768-48fc-adb4-7f671117a2ad", 00:11:41.060 "is_configured": false, 00:11:41.060 "data_offset": 0, 00:11:41.060 "data_size": 63488 00:11:41.060 }, 00:11:41.060 { 00:11:41.060 "name": "BaseBdev2", 00:11:41.060 "uuid": "37496fc5-f9e0-4223-ab3d-b93598c26cfa", 00:11:41.060 "is_configured": true, 00:11:41.060 "data_offset": 2048, 00:11:41.060 "data_size": 63488 00:11:41.060 }, 00:11:41.060 { 00:11:41.060 "name": "BaseBdev3", 00:11:41.060 "uuid": "0c69b411-4d73-40d9-8eb7-ed738f8d4a8b", 00:11:41.060 "is_configured": true, 00:11:41.060 "data_offset": 2048, 00:11:41.060 "data_size": 63488 00:11:41.060 }, 00:11:41.060 { 00:11:41.060 "name": "BaseBdev4", 00:11:41.060 "uuid": "4fbd6c34-2f87-4862-bef4-89ef0759ec70", 00:11:41.060 "is_configured": true, 00:11:41.060 "data_offset": 2048, 00:11:41.060 "data_size": 63488 00:11:41.060 } 00:11:41.060 ] 00:11:41.061 }' 00:11:41.061 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.061 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.320 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.320 15:38:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.320 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.320 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.320 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.320 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:41.320 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.320 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:41.320 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.320 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.320 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.320 15:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3cb55370-1768-48fc-adb4-7f671117a2ad 00:11:41.320 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.320 15:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.580 [2024-11-25 15:38:40.027538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:41.580 [2024-11-25 15:38:40.027853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:41.580 NewBaseBdev 00:11:41.580 [2024-11-25 15:38:40.027901] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:41.580 [2024-11-25 15:38:40.028180] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:41.580 [2024-11-25 15:38:40.028327] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:41.580 [2024-11-25 15:38:40.028340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:41.580 [2024-11-25 15:38:40.028467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.580 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.580 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:41.580 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:41.580 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.580 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:41.580 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.580 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.580 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.580 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.580 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.580 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.580 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:41.580 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.580 
15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.580 [ 00:11:41.580 { 00:11:41.580 "name": "NewBaseBdev", 00:11:41.580 "aliases": [ 00:11:41.580 "3cb55370-1768-48fc-adb4-7f671117a2ad" 00:11:41.580 ], 00:11:41.580 "product_name": "Malloc disk", 00:11:41.580 "block_size": 512, 00:11:41.580 "num_blocks": 65536, 00:11:41.580 "uuid": "3cb55370-1768-48fc-adb4-7f671117a2ad", 00:11:41.580 "assigned_rate_limits": { 00:11:41.580 "rw_ios_per_sec": 0, 00:11:41.580 "rw_mbytes_per_sec": 0, 00:11:41.580 "r_mbytes_per_sec": 0, 00:11:41.580 "w_mbytes_per_sec": 0 00:11:41.580 }, 00:11:41.580 "claimed": true, 00:11:41.580 "claim_type": "exclusive_write", 00:11:41.580 "zoned": false, 00:11:41.580 "supported_io_types": { 00:11:41.580 "read": true, 00:11:41.580 "write": true, 00:11:41.580 "unmap": true, 00:11:41.580 "flush": true, 00:11:41.580 "reset": true, 00:11:41.580 "nvme_admin": false, 00:11:41.580 "nvme_io": false, 00:11:41.580 "nvme_io_md": false, 00:11:41.581 "write_zeroes": true, 00:11:41.581 "zcopy": true, 00:11:41.581 "get_zone_info": false, 00:11:41.581 "zone_management": false, 00:11:41.581 "zone_append": false, 00:11:41.581 "compare": false, 00:11:41.581 "compare_and_write": false, 00:11:41.581 "abort": true, 00:11:41.581 "seek_hole": false, 00:11:41.581 "seek_data": false, 00:11:41.581 "copy": true, 00:11:41.581 "nvme_iov_md": false 00:11:41.581 }, 00:11:41.581 "memory_domains": [ 00:11:41.581 { 00:11:41.581 "dma_device_id": "system", 00:11:41.581 "dma_device_type": 1 00:11:41.581 }, 00:11:41.581 { 00:11:41.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.581 "dma_device_type": 2 00:11:41.581 } 00:11:41.581 ], 00:11:41.581 "driver_specific": {} 00:11:41.581 } 00:11:41.581 ] 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:41.581 15:38:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.581 "name": "Existed_Raid", 00:11:41.581 "uuid": "538cab67-6c0d-45f8-b694-89e5909e49fe", 00:11:41.581 "strip_size_kb": 64, 00:11:41.581 
"state": "online", 00:11:41.581 "raid_level": "concat", 00:11:41.581 "superblock": true, 00:11:41.581 "num_base_bdevs": 4, 00:11:41.581 "num_base_bdevs_discovered": 4, 00:11:41.581 "num_base_bdevs_operational": 4, 00:11:41.581 "base_bdevs_list": [ 00:11:41.581 { 00:11:41.581 "name": "NewBaseBdev", 00:11:41.581 "uuid": "3cb55370-1768-48fc-adb4-7f671117a2ad", 00:11:41.581 "is_configured": true, 00:11:41.581 "data_offset": 2048, 00:11:41.581 "data_size": 63488 00:11:41.581 }, 00:11:41.581 { 00:11:41.581 "name": "BaseBdev2", 00:11:41.581 "uuid": "37496fc5-f9e0-4223-ab3d-b93598c26cfa", 00:11:41.581 "is_configured": true, 00:11:41.581 "data_offset": 2048, 00:11:41.581 "data_size": 63488 00:11:41.581 }, 00:11:41.581 { 00:11:41.581 "name": "BaseBdev3", 00:11:41.581 "uuid": "0c69b411-4d73-40d9-8eb7-ed738f8d4a8b", 00:11:41.581 "is_configured": true, 00:11:41.581 "data_offset": 2048, 00:11:41.581 "data_size": 63488 00:11:41.581 }, 00:11:41.581 { 00:11:41.581 "name": "BaseBdev4", 00:11:41.581 "uuid": "4fbd6c34-2f87-4862-bef4-89ef0759ec70", 00:11:41.581 "is_configured": true, 00:11:41.581 "data_offset": 2048, 00:11:41.581 "data_size": 63488 00:11:41.581 } 00:11:41.581 ] 00:11:41.581 }' 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.581 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.840 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:41.840 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:41.840 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.840 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.840 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.840 
15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.840 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:41.840 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.840 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.840 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.840 [2024-11-25 15:38:40.499161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.840 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.099 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.099 "name": "Existed_Raid", 00:11:42.099 "aliases": [ 00:11:42.099 "538cab67-6c0d-45f8-b694-89e5909e49fe" 00:11:42.099 ], 00:11:42.099 "product_name": "Raid Volume", 00:11:42.099 "block_size": 512, 00:11:42.099 "num_blocks": 253952, 00:11:42.099 "uuid": "538cab67-6c0d-45f8-b694-89e5909e49fe", 00:11:42.099 "assigned_rate_limits": { 00:11:42.099 "rw_ios_per_sec": 0, 00:11:42.099 "rw_mbytes_per_sec": 0, 00:11:42.099 "r_mbytes_per_sec": 0, 00:11:42.099 "w_mbytes_per_sec": 0 00:11:42.099 }, 00:11:42.099 "claimed": false, 00:11:42.099 "zoned": false, 00:11:42.099 "supported_io_types": { 00:11:42.099 "read": true, 00:11:42.099 "write": true, 00:11:42.099 "unmap": true, 00:11:42.099 "flush": true, 00:11:42.099 "reset": true, 00:11:42.099 "nvme_admin": false, 00:11:42.099 "nvme_io": false, 00:11:42.099 "nvme_io_md": false, 00:11:42.099 "write_zeroes": true, 00:11:42.099 "zcopy": false, 00:11:42.099 "get_zone_info": false, 00:11:42.099 "zone_management": false, 00:11:42.099 "zone_append": false, 00:11:42.099 "compare": false, 00:11:42.099 "compare_and_write": false, 00:11:42.099 "abort": 
false, 00:11:42.099 "seek_hole": false, 00:11:42.099 "seek_data": false, 00:11:42.099 "copy": false, 00:11:42.099 "nvme_iov_md": false 00:11:42.099 }, 00:11:42.099 "memory_domains": [ 00:11:42.099 { 00:11:42.100 "dma_device_id": "system", 00:11:42.100 "dma_device_type": 1 00:11:42.100 }, 00:11:42.100 { 00:11:42.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.100 "dma_device_type": 2 00:11:42.100 }, 00:11:42.100 { 00:11:42.100 "dma_device_id": "system", 00:11:42.100 "dma_device_type": 1 00:11:42.100 }, 00:11:42.100 { 00:11:42.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.100 "dma_device_type": 2 00:11:42.100 }, 00:11:42.100 { 00:11:42.100 "dma_device_id": "system", 00:11:42.100 "dma_device_type": 1 00:11:42.100 }, 00:11:42.100 { 00:11:42.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.100 "dma_device_type": 2 00:11:42.100 }, 00:11:42.100 { 00:11:42.100 "dma_device_id": "system", 00:11:42.100 "dma_device_type": 1 00:11:42.100 }, 00:11:42.100 { 00:11:42.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.100 "dma_device_type": 2 00:11:42.100 } 00:11:42.100 ], 00:11:42.100 "driver_specific": { 00:11:42.100 "raid": { 00:11:42.100 "uuid": "538cab67-6c0d-45f8-b694-89e5909e49fe", 00:11:42.100 "strip_size_kb": 64, 00:11:42.100 "state": "online", 00:11:42.100 "raid_level": "concat", 00:11:42.100 "superblock": true, 00:11:42.100 "num_base_bdevs": 4, 00:11:42.100 "num_base_bdevs_discovered": 4, 00:11:42.100 "num_base_bdevs_operational": 4, 00:11:42.100 "base_bdevs_list": [ 00:11:42.100 { 00:11:42.100 "name": "NewBaseBdev", 00:11:42.100 "uuid": "3cb55370-1768-48fc-adb4-7f671117a2ad", 00:11:42.100 "is_configured": true, 00:11:42.100 "data_offset": 2048, 00:11:42.100 "data_size": 63488 00:11:42.100 }, 00:11:42.100 { 00:11:42.100 "name": "BaseBdev2", 00:11:42.100 "uuid": "37496fc5-f9e0-4223-ab3d-b93598c26cfa", 00:11:42.100 "is_configured": true, 00:11:42.100 "data_offset": 2048, 00:11:42.100 "data_size": 63488 00:11:42.100 }, 00:11:42.100 { 00:11:42.100 
"name": "BaseBdev3", 00:11:42.100 "uuid": "0c69b411-4d73-40d9-8eb7-ed738f8d4a8b", 00:11:42.100 "is_configured": true, 00:11:42.100 "data_offset": 2048, 00:11:42.100 "data_size": 63488 00:11:42.100 }, 00:11:42.100 { 00:11:42.100 "name": "BaseBdev4", 00:11:42.100 "uuid": "4fbd6c34-2f87-4862-bef4-89ef0759ec70", 00:11:42.100 "is_configured": true, 00:11:42.100 "data_offset": 2048, 00:11:42.100 "data_size": 63488 00:11:42.100 } 00:11:42.100 ] 00:11:42.100 } 00:11:42.100 } 00:11:42.100 }' 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:42.100 BaseBdev2 00:11:42.100 BaseBdev3 00:11:42.100 BaseBdev4' 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.100 15:38:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.100 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.359 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.359 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.359 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.359 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.359 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.359 [2024-11-25 15:38:40.798303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.359 [2024-11-25 15:38:40.798372] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.359 [2024-11-25 15:38:40.798460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.359 [2024-11-25 15:38:40.798526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.359 [2024-11-25 15:38:40.798536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:42.359 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.359 15:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71689 00:11:42.359 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71689 ']' 00:11:42.360 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71689 00:11:42.360 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:42.360 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.360 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71689 00:11:42.360 killing process with pid 71689 00:11:42.360 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.360 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.360 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71689' 00:11:42.360 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71689 00:11:42.360 [2024-11-25 15:38:40.844711] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.360 15:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71689 00:11:42.618 [2024-11-25 15:38:41.222944] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.995 15:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:43.995 00:11:43.995 real 0m11.183s 00:11:43.995 user 0m17.831s 00:11:43.995 sys 0m1.961s 00:11:43.995 15:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.995 
************************************ 00:11:43.995 END TEST raid_state_function_test_sb 00:11:43.995 ************************************ 00:11:43.995 15:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.995 15:38:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:43.995 15:38:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:43.995 15:38:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.995 15:38:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.995 ************************************ 00:11:43.995 START TEST raid_superblock_test 00:11:43.995 ************************************ 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72354 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72354 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72354 ']' 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.995 15:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.995 [2024-11-25 15:38:42.441223] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:11:43.995 [2024-11-25 15:38:42.441431] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72354 ] 00:11:43.995 [2024-11-25 15:38:42.613583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.254 [2024-11-25 15:38:42.724105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.254 [2024-11-25 15:38:42.922337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.254 [2024-11-25 15:38:42.922437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:44.823 
15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.823 malloc1 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.823 [2024-11-25 15:38:43.317957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:44.823 [2024-11-25 15:38:43.318070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.823 [2024-11-25 15:38:43.318113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:44.823 [2024-11-25 15:38:43.318147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.823 [2024-11-25 15:38:43.320187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.823 [2024-11-25 15:38:43.320266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:44.823 pt1 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.823 malloc2 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.823 [2024-11-25 15:38:43.374207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:44.823 [2024-11-25 15:38:43.374261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.823 [2024-11-25 15:38:43.374283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:44.823 [2024-11-25 15:38:43.374291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.823 [2024-11-25 15:38:43.376264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.823 [2024-11-25 15:38:43.376301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:44.823 
pt2 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.823 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.824 malloc3 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.824 [2024-11-25 15:38:43.438143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:44.824 [2024-11-25 15:38:43.438257] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.824 [2024-11-25 15:38:43.438296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:44.824 [2024-11-25 15:38:43.438328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.824 [2024-11-25 15:38:43.440348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.824 [2024-11-25 15:38:43.440443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:44.824 pt3 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.824 malloc4 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.824 [2024-11-25 15:38:43.495786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:44.824 [2024-11-25 15:38:43.495897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.824 [2024-11-25 15:38:43.495947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:44.824 [2024-11-25 15:38:43.495975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.824 [2024-11-25 15:38:43.498075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.824 [2024-11-25 15:38:43.498140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:44.824 pt4 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.824 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:45.083 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.083 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.083 [2024-11-25 15:38:43.507801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.083 [2024-11-25 
15:38:43.509672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:45.083 [2024-11-25 15:38:43.509783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:45.083 [2024-11-25 15:38:43.509880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:45.083 [2024-11-25 15:38:43.510136] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:45.083 [2024-11-25 15:38:43.510192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:45.083 [2024-11-25 15:38:43.510495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:45.083 [2024-11-25 15:38:43.510738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:45.083 [2024-11-25 15:38:43.510759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:45.083 [2024-11-25 15:38:43.510921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.083 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.083 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:45.083 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.083 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.083 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.083 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.083 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.083 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:45.083 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.084 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.084 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.084 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.084 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.084 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.084 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.084 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.084 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.084 "name": "raid_bdev1", 00:11:45.084 "uuid": "3264bddd-6fb1-43f8-9feb-906791130b9c", 00:11:45.084 "strip_size_kb": 64, 00:11:45.084 "state": "online", 00:11:45.084 "raid_level": "concat", 00:11:45.084 "superblock": true, 00:11:45.084 "num_base_bdevs": 4, 00:11:45.084 "num_base_bdevs_discovered": 4, 00:11:45.084 "num_base_bdevs_operational": 4, 00:11:45.084 "base_bdevs_list": [ 00:11:45.084 { 00:11:45.084 "name": "pt1", 00:11:45.084 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.084 "is_configured": true, 00:11:45.084 "data_offset": 2048, 00:11:45.084 "data_size": 63488 00:11:45.084 }, 00:11:45.084 { 00:11:45.084 "name": "pt2", 00:11:45.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.084 "is_configured": true, 00:11:45.084 "data_offset": 2048, 00:11:45.084 "data_size": 63488 00:11:45.084 }, 00:11:45.084 { 00:11:45.084 "name": "pt3", 00:11:45.084 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.084 "is_configured": true, 00:11:45.084 "data_offset": 2048, 00:11:45.084 
"data_size": 63488 00:11:45.084 }, 00:11:45.084 { 00:11:45.084 "name": "pt4", 00:11:45.084 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.084 "is_configured": true, 00:11:45.084 "data_offset": 2048, 00:11:45.084 "data_size": 63488 00:11:45.084 } 00:11:45.084 ] 00:11:45.084 }' 00:11:45.084 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.084 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.343 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:45.343 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:45.343 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:45.343 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:45.343 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:45.343 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:45.343 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:45.343 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:45.343 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.343 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.343 [2024-11-25 15:38:43.963394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.343 15:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.343 15:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:45.343 "name": "raid_bdev1", 00:11:45.343 "aliases": [ 00:11:45.343 "3264bddd-6fb1-43f8-9feb-906791130b9c" 
00:11:45.343 ], 00:11:45.343 "product_name": "Raid Volume", 00:11:45.343 "block_size": 512, 00:11:45.343 "num_blocks": 253952, 00:11:45.343 "uuid": "3264bddd-6fb1-43f8-9feb-906791130b9c", 00:11:45.343 "assigned_rate_limits": { 00:11:45.343 "rw_ios_per_sec": 0, 00:11:45.343 "rw_mbytes_per_sec": 0, 00:11:45.343 "r_mbytes_per_sec": 0, 00:11:45.343 "w_mbytes_per_sec": 0 00:11:45.343 }, 00:11:45.343 "claimed": false, 00:11:45.343 "zoned": false, 00:11:45.343 "supported_io_types": { 00:11:45.343 "read": true, 00:11:45.343 "write": true, 00:11:45.343 "unmap": true, 00:11:45.343 "flush": true, 00:11:45.343 "reset": true, 00:11:45.343 "nvme_admin": false, 00:11:45.343 "nvme_io": false, 00:11:45.343 "nvme_io_md": false, 00:11:45.343 "write_zeroes": true, 00:11:45.343 "zcopy": false, 00:11:45.343 "get_zone_info": false, 00:11:45.343 "zone_management": false, 00:11:45.343 "zone_append": false, 00:11:45.343 "compare": false, 00:11:45.343 "compare_and_write": false, 00:11:45.343 "abort": false, 00:11:45.343 "seek_hole": false, 00:11:45.343 "seek_data": false, 00:11:45.343 "copy": false, 00:11:45.343 "nvme_iov_md": false 00:11:45.343 }, 00:11:45.343 "memory_domains": [ 00:11:45.343 { 00:11:45.343 "dma_device_id": "system", 00:11:45.343 "dma_device_type": 1 00:11:45.343 }, 00:11:45.343 { 00:11:45.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.343 "dma_device_type": 2 00:11:45.343 }, 00:11:45.343 { 00:11:45.343 "dma_device_id": "system", 00:11:45.343 "dma_device_type": 1 00:11:45.343 }, 00:11:45.343 { 00:11:45.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.343 "dma_device_type": 2 00:11:45.343 }, 00:11:45.343 { 00:11:45.343 "dma_device_id": "system", 00:11:45.343 "dma_device_type": 1 00:11:45.343 }, 00:11:45.343 { 00:11:45.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.343 "dma_device_type": 2 00:11:45.343 }, 00:11:45.343 { 00:11:45.343 "dma_device_id": "system", 00:11:45.343 "dma_device_type": 1 00:11:45.343 }, 00:11:45.343 { 00:11:45.343 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:45.343 "dma_device_type": 2 00:11:45.343 } 00:11:45.343 ], 00:11:45.343 "driver_specific": { 00:11:45.343 "raid": { 00:11:45.343 "uuid": "3264bddd-6fb1-43f8-9feb-906791130b9c", 00:11:45.343 "strip_size_kb": 64, 00:11:45.343 "state": "online", 00:11:45.343 "raid_level": "concat", 00:11:45.343 "superblock": true, 00:11:45.343 "num_base_bdevs": 4, 00:11:45.343 "num_base_bdevs_discovered": 4, 00:11:45.343 "num_base_bdevs_operational": 4, 00:11:45.343 "base_bdevs_list": [ 00:11:45.343 { 00:11:45.343 "name": "pt1", 00:11:45.343 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.343 "is_configured": true, 00:11:45.343 "data_offset": 2048, 00:11:45.343 "data_size": 63488 00:11:45.343 }, 00:11:45.343 { 00:11:45.343 "name": "pt2", 00:11:45.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.343 "is_configured": true, 00:11:45.343 "data_offset": 2048, 00:11:45.343 "data_size": 63488 00:11:45.343 }, 00:11:45.343 { 00:11:45.343 "name": "pt3", 00:11:45.343 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.343 "is_configured": true, 00:11:45.343 "data_offset": 2048, 00:11:45.343 "data_size": 63488 00:11:45.343 }, 00:11:45.343 { 00:11:45.344 "name": "pt4", 00:11:45.344 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.344 "is_configured": true, 00:11:45.344 "data_offset": 2048, 00:11:45.344 "data_size": 63488 00:11:45.344 } 00:11:45.344 ] 00:11:45.344 } 00:11:45.344 } 00:11:45.344 }' 00:11:45.344 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:45.603 pt2 00:11:45.603 pt3 00:11:45.603 pt4' 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.603 15:38:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.603 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.604 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.604 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.604 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.604 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.604 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:45.604 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:45.604 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:45.604 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.604 [2024-11-25 15:38:44.266785] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3264bddd-6fb1-43f8-9feb-906791130b9c 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3264bddd-6fb1-43f8-9feb-906791130b9c ']' 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.863 [2024-11-25 15:38:44.310405] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.863 [2024-11-25 15:38:44.310469] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.863 [2024-11-25 15:38:44.310567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.863 [2024-11-25 15:38:44.310643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.863 [2024-11-25 15:38:44.310670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.863 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.864 15:38:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.864 [2024-11-25 15:38:44.470163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:45.864 [2024-11-25 15:38:44.472072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:45.864 [2024-11-25 15:38:44.472171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:45.864 [2024-11-25 15:38:44.472225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:45.864 [2024-11-25 15:38:44.472336] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:45.864 [2024-11-25 15:38:44.472397] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:45.864 [2024-11-25 15:38:44.472420] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:45.864 [2024-11-25 15:38:44.472441] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:45.864 [2024-11-25 15:38:44.472456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.864 [2024-11-25 15:38:44.472482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:45.864 request: 00:11:45.864 { 00:11:45.864 "name": "raid_bdev1", 00:11:45.864 "raid_level": "concat", 00:11:45.864 "base_bdevs": [ 00:11:45.864 "malloc1", 00:11:45.864 "malloc2", 00:11:45.864 "malloc3", 00:11:45.864 "malloc4" 00:11:45.864 ], 00:11:45.864 "strip_size_kb": 64, 00:11:45.864 "superblock": false, 00:11:45.864 "method": "bdev_raid_create", 00:11:45.864 "req_id": 1 00:11:45.864 } 00:11:45.864 Got JSON-RPC error response 00:11:45.864 response: 00:11:45.864 { 00:11:45.864 "code": -17, 00:11:45.864 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:45.864 } 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.864 [2024-11-25 15:38:44.522056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:45.864 [2024-11-25 15:38:44.522148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.864 [2024-11-25 15:38:44.522181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:45.864 [2024-11-25 15:38:44.522210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.864 [2024-11-25 15:38:44.524499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.864 [2024-11-25 15:38:44.524591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:45.864 [2024-11-25 15:38:44.524698] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:45.864 [2024-11-25 15:38:44.524807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.864 pt1 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.864 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.130 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.130 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.130 "name": "raid_bdev1", 00:11:46.130 "uuid": "3264bddd-6fb1-43f8-9feb-906791130b9c", 00:11:46.130 "strip_size_kb": 64, 00:11:46.130 "state": "configuring", 00:11:46.130 "raid_level": "concat", 00:11:46.130 "superblock": true, 00:11:46.130 "num_base_bdevs": 4, 00:11:46.130 "num_base_bdevs_discovered": 1, 00:11:46.130 "num_base_bdevs_operational": 4, 00:11:46.130 "base_bdevs_list": [ 00:11:46.130 { 00:11:46.130 "name": "pt1", 00:11:46.130 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.130 "is_configured": true, 00:11:46.130 "data_offset": 2048, 00:11:46.130 "data_size": 63488 00:11:46.130 }, 00:11:46.130 { 00:11:46.130 "name": null, 00:11:46.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.130 "is_configured": false, 00:11:46.130 "data_offset": 2048, 00:11:46.130 "data_size": 63488 00:11:46.130 }, 00:11:46.130 { 00:11:46.130 "name": null, 00:11:46.130 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.130 "is_configured": false, 00:11:46.130 "data_offset": 2048, 00:11:46.130 "data_size": 63488 00:11:46.130 }, 00:11:46.130 { 00:11:46.130 "name": null, 00:11:46.130 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.130 "is_configured": false, 00:11:46.130 "data_offset": 2048, 00:11:46.130 "data_size": 63488 00:11:46.130 } 00:11:46.130 ] 00:11:46.130 }' 00:11:46.130 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.130 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.388 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:46.388 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.389 [2024-11-25 15:38:44.969330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.389 [2024-11-25 15:38:44.969417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.389 [2024-11-25 15:38:44.969440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:46.389 [2024-11-25 15:38:44.969453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.389 [2024-11-25 15:38:44.969971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.389 [2024-11-25 15:38:44.969996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.389 [2024-11-25 15:38:44.970110] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:46.389 [2024-11-25 15:38:44.970140] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:46.389 pt2 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.389 [2024-11-25 15:38:44.981296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.389 15:38:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.389 15:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.389 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.389 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.389 "name": "raid_bdev1", 00:11:46.389 "uuid": "3264bddd-6fb1-43f8-9feb-906791130b9c", 00:11:46.389 "strip_size_kb": 64, 00:11:46.389 "state": "configuring", 00:11:46.389 "raid_level": "concat", 00:11:46.389 "superblock": true, 00:11:46.389 "num_base_bdevs": 4, 00:11:46.389 "num_base_bdevs_discovered": 1, 00:11:46.389 "num_base_bdevs_operational": 4, 00:11:46.389 "base_bdevs_list": [ 00:11:46.389 { 00:11:46.389 "name": "pt1", 00:11:46.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.389 "is_configured": true, 00:11:46.389 "data_offset": 2048, 00:11:46.389 "data_size": 63488 00:11:46.389 }, 00:11:46.389 { 00:11:46.389 "name": null, 00:11:46.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.389 "is_configured": false, 00:11:46.389 "data_offset": 0, 00:11:46.389 "data_size": 63488 00:11:46.389 }, 00:11:46.389 { 00:11:46.389 "name": null, 00:11:46.389 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.389 "is_configured": false, 00:11:46.389 "data_offset": 2048, 00:11:46.389 "data_size": 63488 00:11:46.389 }, 00:11:46.389 { 00:11:46.389 "name": null, 00:11:46.389 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.389 "is_configured": false, 00:11:46.389 "data_offset": 2048, 00:11:46.389 "data_size": 63488 00:11:46.389 } 00:11:46.389 ] 00:11:46.389 }' 00:11:46.389 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.389 15:38:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.957 [2024-11-25 15:38:45.404552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.957 [2024-11-25 15:38:45.404690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.957 [2024-11-25 15:38:45.404728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:46.957 [2024-11-25 15:38:45.404756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.957 [2024-11-25 15:38:45.405237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.957 [2024-11-25 15:38:45.405294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.957 [2024-11-25 15:38:45.405408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:46.957 [2024-11-25 15:38:45.405458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:46.957 pt2 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.957 [2024-11-25 15:38:45.416496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:46.957 [2024-11-25 15:38:45.416598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.957 [2024-11-25 15:38:45.416637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:46.957 [2024-11-25 15:38:45.416670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.957 [2024-11-25 15:38:45.417078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.957 [2024-11-25 15:38:45.417131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:46.957 [2024-11-25 15:38:45.417221] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:46.957 [2024-11-25 15:38:45.417267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:46.957 pt3 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.957 [2024-11-25 15:38:45.428455] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:46.957 [2024-11-25 15:38:45.428502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.957 [2024-11-25 15:38:45.428519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:46.957 [2024-11-25 15:38:45.428526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.957 [2024-11-25 15:38:45.428856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.957 [2024-11-25 15:38:45.428871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:46.957 [2024-11-25 15:38:45.428932] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:46.957 [2024-11-25 15:38:45.428949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:46.957 [2024-11-25 15:38:45.429088] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:46.957 [2024-11-25 15:38:45.429097] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:46.957 [2024-11-25 15:38:45.429344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:46.957 [2024-11-25 15:38:45.429511] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:46.957 [2024-11-25 15:38:45.429529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:46.957 [2024-11-25 15:38:45.429666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.957 pt4 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.957 "name": "raid_bdev1", 00:11:46.957 "uuid": "3264bddd-6fb1-43f8-9feb-906791130b9c", 00:11:46.957 "strip_size_kb": 64, 00:11:46.957 "state": "online", 00:11:46.957 "raid_level": "concat", 00:11:46.957 
"superblock": true, 00:11:46.957 "num_base_bdevs": 4, 00:11:46.957 "num_base_bdevs_discovered": 4, 00:11:46.957 "num_base_bdevs_operational": 4, 00:11:46.957 "base_bdevs_list": [ 00:11:46.957 { 00:11:46.957 "name": "pt1", 00:11:46.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.957 "is_configured": true, 00:11:46.957 "data_offset": 2048, 00:11:46.957 "data_size": 63488 00:11:46.957 }, 00:11:46.957 { 00:11:46.957 "name": "pt2", 00:11:46.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.957 "is_configured": true, 00:11:46.957 "data_offset": 2048, 00:11:46.957 "data_size": 63488 00:11:46.957 }, 00:11:46.957 { 00:11:46.957 "name": "pt3", 00:11:46.957 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.957 "is_configured": true, 00:11:46.957 "data_offset": 2048, 00:11:46.957 "data_size": 63488 00:11:46.957 }, 00:11:46.957 { 00:11:46.957 "name": "pt4", 00:11:46.957 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.957 "is_configured": true, 00:11:46.957 "data_offset": 2048, 00:11:46.957 "data_size": 63488 00:11:46.957 } 00:11:46.957 ] 00:11:46.957 }' 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.957 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.216 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:47.216 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:47.216 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:47.216 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:47.216 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:47.216 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:47.216 15:38:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:47.216 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:47.216 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.216 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.216 [2024-11-25 15:38:45.864093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.216 15:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.475 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.475 "name": "raid_bdev1", 00:11:47.475 "aliases": [ 00:11:47.475 "3264bddd-6fb1-43f8-9feb-906791130b9c" 00:11:47.475 ], 00:11:47.475 "product_name": "Raid Volume", 00:11:47.475 "block_size": 512, 00:11:47.475 "num_blocks": 253952, 00:11:47.475 "uuid": "3264bddd-6fb1-43f8-9feb-906791130b9c", 00:11:47.475 "assigned_rate_limits": { 00:11:47.475 "rw_ios_per_sec": 0, 00:11:47.475 "rw_mbytes_per_sec": 0, 00:11:47.475 "r_mbytes_per_sec": 0, 00:11:47.475 "w_mbytes_per_sec": 0 00:11:47.475 }, 00:11:47.475 "claimed": false, 00:11:47.475 "zoned": false, 00:11:47.475 "supported_io_types": { 00:11:47.475 "read": true, 00:11:47.475 "write": true, 00:11:47.475 "unmap": true, 00:11:47.475 "flush": true, 00:11:47.475 "reset": true, 00:11:47.475 "nvme_admin": false, 00:11:47.475 "nvme_io": false, 00:11:47.475 "nvme_io_md": false, 00:11:47.475 "write_zeroes": true, 00:11:47.475 "zcopy": false, 00:11:47.475 "get_zone_info": false, 00:11:47.475 "zone_management": false, 00:11:47.475 "zone_append": false, 00:11:47.475 "compare": false, 00:11:47.475 "compare_and_write": false, 00:11:47.475 "abort": false, 00:11:47.475 "seek_hole": false, 00:11:47.475 "seek_data": false, 00:11:47.475 "copy": false, 00:11:47.475 "nvme_iov_md": false 00:11:47.475 }, 00:11:47.475 
"memory_domains": [ 00:11:47.475 { 00:11:47.475 "dma_device_id": "system", 00:11:47.475 "dma_device_type": 1 00:11:47.475 }, 00:11:47.475 { 00:11:47.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.475 "dma_device_type": 2 00:11:47.475 }, 00:11:47.475 { 00:11:47.475 "dma_device_id": "system", 00:11:47.475 "dma_device_type": 1 00:11:47.475 }, 00:11:47.475 { 00:11:47.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.475 "dma_device_type": 2 00:11:47.475 }, 00:11:47.475 { 00:11:47.475 "dma_device_id": "system", 00:11:47.475 "dma_device_type": 1 00:11:47.475 }, 00:11:47.475 { 00:11:47.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.475 "dma_device_type": 2 00:11:47.475 }, 00:11:47.475 { 00:11:47.475 "dma_device_id": "system", 00:11:47.475 "dma_device_type": 1 00:11:47.475 }, 00:11:47.475 { 00:11:47.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.475 "dma_device_type": 2 00:11:47.475 } 00:11:47.475 ], 00:11:47.475 "driver_specific": { 00:11:47.475 "raid": { 00:11:47.475 "uuid": "3264bddd-6fb1-43f8-9feb-906791130b9c", 00:11:47.475 "strip_size_kb": 64, 00:11:47.475 "state": "online", 00:11:47.475 "raid_level": "concat", 00:11:47.475 "superblock": true, 00:11:47.475 "num_base_bdevs": 4, 00:11:47.475 "num_base_bdevs_discovered": 4, 00:11:47.475 "num_base_bdevs_operational": 4, 00:11:47.475 "base_bdevs_list": [ 00:11:47.475 { 00:11:47.475 "name": "pt1", 00:11:47.475 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.475 "is_configured": true, 00:11:47.475 "data_offset": 2048, 00:11:47.475 "data_size": 63488 00:11:47.475 }, 00:11:47.475 { 00:11:47.475 "name": "pt2", 00:11:47.475 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.475 "is_configured": true, 00:11:47.475 "data_offset": 2048, 00:11:47.475 "data_size": 63488 00:11:47.475 }, 00:11:47.475 { 00:11:47.475 "name": "pt3", 00:11:47.475 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.475 "is_configured": true, 00:11:47.475 "data_offset": 2048, 00:11:47.475 "data_size": 63488 
00:11:47.475 }, 00:11:47.475 { 00:11:47.475 "name": "pt4", 00:11:47.475 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.475 "is_configured": true, 00:11:47.475 "data_offset": 2048, 00:11:47.475 "data_size": 63488 00:11:47.475 } 00:11:47.475 ] 00:11:47.475 } 00:11:47.475 } 00:11:47.475 }' 00:11:47.475 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.475 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:47.475 pt2 00:11:47.475 pt3 00:11:47.475 pt4' 00:11:47.475 15:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.475 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.475 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.475 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:47.475 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.475 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.475 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.475 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.475 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.475 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.475 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.475 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.475 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:47.475 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.475 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.476 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.476 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.476 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.476 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.476 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.476 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:47.476 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.476 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.476 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.476 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.476 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.476 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.734 [2024-11-25 15:38:46.219416] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3264bddd-6fb1-43f8-9feb-906791130b9c '!=' 3264bddd-6fb1-43f8-9feb-906791130b9c ']' 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72354 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72354 ']' 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72354 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72354 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.734 killing process with pid 72354 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72354' 00:11:47.734 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72354 00:11:47.734 [2024-11-25 15:38:46.290378] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.734 [2024-11-25 15:38:46.290461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.734 [2024-11-25 15:38:46.290535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.734 [2024-11-25 15:38:46.290544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:47.735 15:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72354 00:11:48.302 [2024-11-25 15:38:46.685123] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.237 15:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:49.238 00:11:49.238 real 0m5.410s 00:11:49.238 user 0m7.763s 00:11:49.238 sys 0m0.937s 00:11:49.238 15:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.238 15:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.238 ************************************ 00:11:49.238 END TEST raid_superblock_test 
00:11:49.238 ************************************ 00:11:49.238 15:38:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:49.238 15:38:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:49.238 15:38:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.238 15:38:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:49.238 ************************************ 00:11:49.238 START TEST raid_read_error_test 00:11:49.238 ************************************ 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.44uhKhj3B7 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72613 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72613 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72613 ']' 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.238 15:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.497 [2024-11-25 15:38:47.938862] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:11:49.497 [2024-11-25 15:38:47.939068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72613 ] 00:11:49.497 [2024-11-25 15:38:48.112066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.756 [2024-11-25 15:38:48.220463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.756 [2024-11-25 15:38:48.412700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.756 [2024-11-25 15:38:48.412807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.325 BaseBdev1_malloc 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.325 true 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.325 [2024-11-25 15:38:48.819232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:50.325 [2024-11-25 15:38:48.819286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.325 [2024-11-25 15:38:48.819307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:50.325 [2024-11-25 15:38:48.819318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.325 [2024-11-25 15:38:48.821374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.325 [2024-11-25 15:38:48.821521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:50.325 BaseBdev1 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.325 BaseBdev2_malloc 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.325 true 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.325 [2024-11-25 15:38:48.885082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:50.325 [2024-11-25 15:38:48.885132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.325 [2024-11-25 15:38:48.885165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:50.325 [2024-11-25 15:38:48.885175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.325 [2024-11-25 15:38:48.887135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.325 [2024-11-25 15:38:48.887172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:50.325 BaseBdev2 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.325 BaseBdev3_malloc 00:11:50.325 15:38:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.325 true 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.325 [2024-11-25 15:38:48.963851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:50.325 [2024-11-25 15:38:48.963943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.325 [2024-11-25 15:38:48.963980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:50.325 [2024-11-25 15:38:48.963990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.325 [2024-11-25 15:38:48.966050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.325 [2024-11-25 15:38:48.966129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:50.325 BaseBdev3 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.325 15:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.589 BaseBdev4_malloc 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.589 true 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.589 [2024-11-25 15:38:49.029125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:50.589 [2024-11-25 15:38:49.029173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.589 [2024-11-25 15:38:49.029191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:50.589 [2024-11-25 15:38:49.029201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.589 [2024-11-25 15:38:49.031215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.589 [2024-11-25 15:38:49.031312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:50.589 BaseBdev4 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.589 [2024-11-25 15:38:49.041166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.589 [2024-11-25 15:38:49.042977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.589 [2024-11-25 15:38:49.043064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:50.589 [2024-11-25 15:38:49.043127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:50.589 [2024-11-25 15:38:49.043335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:50.589 [2024-11-25 15:38:49.043354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:50.589 [2024-11-25 15:38:49.043584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:50.589 [2024-11-25 15:38:49.043739] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:50.589 [2024-11-25 15:38:49.043750] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:50.589 [2024-11-25 15:38:49.043890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:50.589 15:38:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.589 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.590 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.590 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.590 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.590 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.590 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.590 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.590 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.590 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.590 15:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.590 15:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.590 15:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.590 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.590 "name": "raid_bdev1", 00:11:50.590 "uuid": "d22cf3d4-c0af-4268-9b5f-ec3bb5d196f8", 00:11:50.590 "strip_size_kb": 64, 00:11:50.590 "state": "online", 00:11:50.590 "raid_level": "concat", 00:11:50.590 "superblock": true, 00:11:50.590 "num_base_bdevs": 4, 00:11:50.590 "num_base_bdevs_discovered": 4, 00:11:50.590 "num_base_bdevs_operational": 4, 00:11:50.590 "base_bdevs_list": [ 
00:11:50.590 { 00:11:50.590 "name": "BaseBdev1", 00:11:50.590 "uuid": "9ab5935b-4f5f-5e05-adc6-66890e6427b8", 00:11:50.590 "is_configured": true, 00:11:50.590 "data_offset": 2048, 00:11:50.590 "data_size": 63488 00:11:50.590 }, 00:11:50.590 { 00:11:50.590 "name": "BaseBdev2", 00:11:50.590 "uuid": "53eeeff6-21b5-5abb-8659-7f9a29bf5313", 00:11:50.590 "is_configured": true, 00:11:50.590 "data_offset": 2048, 00:11:50.590 "data_size": 63488 00:11:50.590 }, 00:11:50.590 { 00:11:50.590 "name": "BaseBdev3", 00:11:50.590 "uuid": "f71aaf8d-f46e-5674-a567-01bc56fdeb31", 00:11:50.590 "is_configured": true, 00:11:50.590 "data_offset": 2048, 00:11:50.590 "data_size": 63488 00:11:50.590 }, 00:11:50.590 { 00:11:50.590 "name": "BaseBdev4", 00:11:50.590 "uuid": "a2adcd56-eb2e-582e-b120-382ff3d479d8", 00:11:50.590 "is_configured": true, 00:11:50.590 "data_offset": 2048, 00:11:50.590 "data_size": 63488 00:11:50.590 } 00:11:50.590 ] 00:11:50.590 }' 00:11:50.590 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.590 15:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.856 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:50.856 15:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:51.114 [2024-11-25 15:38:49.625321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.050 15:38:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.050 15:38:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.050 "name": "raid_bdev1", 00:11:52.050 "uuid": "d22cf3d4-c0af-4268-9b5f-ec3bb5d196f8", 00:11:52.050 "strip_size_kb": 64, 00:11:52.050 "state": "online", 00:11:52.050 "raid_level": "concat", 00:11:52.050 "superblock": true, 00:11:52.050 "num_base_bdevs": 4, 00:11:52.050 "num_base_bdevs_discovered": 4, 00:11:52.050 "num_base_bdevs_operational": 4, 00:11:52.050 "base_bdevs_list": [ 00:11:52.050 { 00:11:52.050 "name": "BaseBdev1", 00:11:52.050 "uuid": "9ab5935b-4f5f-5e05-adc6-66890e6427b8", 00:11:52.050 "is_configured": true, 00:11:52.050 "data_offset": 2048, 00:11:52.050 "data_size": 63488 00:11:52.050 }, 00:11:52.050 { 00:11:52.050 "name": "BaseBdev2", 00:11:52.050 "uuid": "53eeeff6-21b5-5abb-8659-7f9a29bf5313", 00:11:52.050 "is_configured": true, 00:11:52.050 "data_offset": 2048, 00:11:52.050 "data_size": 63488 00:11:52.050 }, 00:11:52.050 { 00:11:52.050 "name": "BaseBdev3", 00:11:52.050 "uuid": "f71aaf8d-f46e-5674-a567-01bc56fdeb31", 00:11:52.050 "is_configured": true, 00:11:52.050 "data_offset": 2048, 00:11:52.050 "data_size": 63488 00:11:52.050 }, 00:11:52.050 { 00:11:52.050 "name": "BaseBdev4", 00:11:52.050 "uuid": "a2adcd56-eb2e-582e-b120-382ff3d479d8", 00:11:52.050 "is_configured": true, 00:11:52.050 "data_offset": 2048, 00:11:52.050 "data_size": 63488 00:11:52.050 } 00:11:52.050 ] 00:11:52.050 }' 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.050 15:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.618 15:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:52.618 15:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.618 15:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.618 [2024-11-25 15:38:51.047787] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.618 [2024-11-25 15:38:51.047899] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.618 [2024-11-25 15:38:51.050625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.618 [2024-11-25 15:38:51.050733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.618 [2024-11-25 15:38:51.050796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.618 [2024-11-25 15:38:51.050870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:52.618 { 00:11:52.618 "results": [ 00:11:52.618 { 00:11:52.618 "job": "raid_bdev1", 00:11:52.618 "core_mask": "0x1", 00:11:52.618 "workload": "randrw", 00:11:52.618 "percentage": 50, 00:11:52.618 "status": "finished", 00:11:52.618 "queue_depth": 1, 00:11:52.618 "io_size": 131072, 00:11:52.618 "runtime": 1.423509, 00:11:52.618 "iops": 16148.826596811119, 00:11:52.618 "mibps": 2018.6033246013899, 00:11:52.618 "io_failed": 1, 00:11:52.618 "io_timeout": 0, 00:11:52.618 "avg_latency_us": 86.21883296758028, 00:11:52.618 "min_latency_us": 25.3764192139738, 00:11:52.618 "max_latency_us": 1459.5353711790392 00:11:52.618 } 00:11:52.618 ], 00:11:52.618 "core_count": 1 00:11:52.618 } 00:11:52.618 15:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.618 15:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72613 00:11:52.618 15:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72613 ']' 00:11:52.618 15:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72613 00:11:52.618 15:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:52.618 15:38:51 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.618 15:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72613 00:11:52.618 15:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.618 15:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.618 15:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72613' 00:11:52.618 killing process with pid 72613 00:11:52.618 15:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72613 00:11:52.618 [2024-11-25 15:38:51.094318] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:52.618 15:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72613 00:11:52.877 [2024-11-25 15:38:51.410765] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.253 15:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.44uhKhj3B7 00:11:54.253 15:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:54.253 15:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:54.253 15:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:54.253 15:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:54.253 15:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:54.253 15:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:54.253 15:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:54.253 00:11:54.253 real 0m4.724s 00:11:54.253 user 0m5.630s 00:11:54.253 sys 0m0.582s 00:11:54.253 15:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:54.253 ************************************ 00:11:54.253 END TEST raid_read_error_test 00:11:54.253 ************************************ 00:11:54.253 15:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.253 15:38:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:54.253 15:38:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:54.253 15:38:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.253 15:38:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.253 ************************************ 00:11:54.253 START TEST raid_write_error_test 00:11:54.253 ************************************ 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sxClq0kXon 00:11:54.253 15:38:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72764 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:54.253 15:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72764 00:11:54.254 15:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72764 ']' 00:11:54.254 15:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.254 15:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.254 15:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.254 15:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.254 15:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.254 [2024-11-25 15:38:52.733675] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:11:54.254 [2024-11-25 15:38:52.733804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72764 ] 00:11:54.254 [2024-11-25 15:38:52.904643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.512 [2024-11-25 15:38:53.018991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.771 [2024-11-25 15:38:53.219941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.771 [2024-11-25 15:38:53.219969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.029 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.030 BaseBdev1_malloc 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.030 true 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.030 [2024-11-25 15:38:53.614874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:55.030 [2024-11-25 15:38:53.614973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.030 [2024-11-25 15:38:53.615017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:55.030 [2024-11-25 15:38:53.615048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.030 [2024-11-25 15:38:53.617135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.030 [2024-11-25 15:38:53.617206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:55.030 BaseBdev1 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.030 BaseBdev2_malloc 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:55.030 15:38:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.030 true 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.030 [2024-11-25 15:38:53.678148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:55.030 [2024-11-25 15:38:53.678201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.030 [2024-11-25 15:38:53.678234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:55.030 [2024-11-25 15:38:53.678244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.030 [2024-11-25 15:38:53.680274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.030 [2024-11-25 15:38:53.680325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:55.030 BaseBdev2 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.030 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:55.289 BaseBdev3_malloc 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.289 true 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.289 [2024-11-25 15:38:53.754631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:55.289 [2024-11-25 15:38:53.754747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.289 [2024-11-25 15:38:53.754782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:55.289 [2024-11-25 15:38:53.754811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.289 [2024-11-25 15:38:53.756871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.289 [2024-11-25 15:38:53.756943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:55.289 BaseBdev3 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.289 BaseBdev4_malloc 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.289 true 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.289 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.289 [2024-11-25 15:38:53.819322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:55.289 [2024-11-25 15:38:53.819373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.289 [2024-11-25 15:38:53.819391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:55.290 [2024-11-25 15:38:53.819401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.290 [2024-11-25 15:38:53.821435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.290 [2024-11-25 15:38:53.821475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:55.290 BaseBdev4 
00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.290 [2024-11-25 15:38:53.831369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.290 [2024-11-25 15:38:53.833117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.290 [2024-11-25 15:38:53.833182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.290 [2024-11-25 15:38:53.833243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:55.290 [2024-11-25 15:38:53.833454] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:55.290 [2024-11-25 15:38:53.833468] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:55.290 [2024-11-25 15:38:53.833700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:55.290 [2024-11-25 15:38:53.833855] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:55.290 [2024-11-25 15:38:53.833865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:55.290 [2024-11-25 15:38:53.834026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.290 "name": "raid_bdev1", 00:11:55.290 "uuid": "2a1a71f8-3602-4dbe-8f6f-a397f8856679", 00:11:55.290 "strip_size_kb": 64, 00:11:55.290 "state": "online", 00:11:55.290 "raid_level": "concat", 00:11:55.290 "superblock": true, 00:11:55.290 "num_base_bdevs": 4, 00:11:55.290 "num_base_bdevs_discovered": 4, 00:11:55.290 
"num_base_bdevs_operational": 4, 00:11:55.290 "base_bdevs_list": [ 00:11:55.290 { 00:11:55.290 "name": "BaseBdev1", 00:11:55.290 "uuid": "98188d64-dc45-5883-915a-d4e257b167ab", 00:11:55.290 "is_configured": true, 00:11:55.290 "data_offset": 2048, 00:11:55.290 "data_size": 63488 00:11:55.290 }, 00:11:55.290 { 00:11:55.290 "name": "BaseBdev2", 00:11:55.290 "uuid": "429ae0db-0199-5f27-89ce-61a60075dec2", 00:11:55.290 "is_configured": true, 00:11:55.290 "data_offset": 2048, 00:11:55.290 "data_size": 63488 00:11:55.290 }, 00:11:55.290 { 00:11:55.290 "name": "BaseBdev3", 00:11:55.290 "uuid": "8357f2ab-8fd0-5bc9-af6f-818675120da6", 00:11:55.290 "is_configured": true, 00:11:55.290 "data_offset": 2048, 00:11:55.290 "data_size": 63488 00:11:55.290 }, 00:11:55.290 { 00:11:55.290 "name": "BaseBdev4", 00:11:55.290 "uuid": "7d547b67-94f4-56af-bff2-226afda1c5af", 00:11:55.290 "is_configured": true, 00:11:55.290 "data_offset": 2048, 00:11:55.290 "data_size": 63488 00:11:55.290 } 00:11:55.290 ] 00:11:55.290 }' 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.290 15:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.857 15:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:55.857 15:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:55.857 [2024-11-25 15:38:54.403497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.792 15:38:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.792 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.792 "name": "raid_bdev1", 00:11:56.792 "uuid": "2a1a71f8-3602-4dbe-8f6f-a397f8856679", 00:11:56.792 "strip_size_kb": 64, 00:11:56.792 "state": "online", 00:11:56.792 "raid_level": "concat", 00:11:56.792 "superblock": true, 00:11:56.792 "num_base_bdevs": 4, 00:11:56.792 "num_base_bdevs_discovered": 4, 00:11:56.792 "num_base_bdevs_operational": 4, 00:11:56.792 "base_bdevs_list": [ 00:11:56.792 { 00:11:56.792 "name": "BaseBdev1", 00:11:56.792 "uuid": "98188d64-dc45-5883-915a-d4e257b167ab", 00:11:56.792 "is_configured": true, 00:11:56.792 "data_offset": 2048, 00:11:56.792 "data_size": 63488 00:11:56.792 }, 00:11:56.792 { 00:11:56.792 "name": "BaseBdev2", 00:11:56.792 "uuid": "429ae0db-0199-5f27-89ce-61a60075dec2", 00:11:56.792 "is_configured": true, 00:11:56.792 "data_offset": 2048, 00:11:56.792 "data_size": 63488 00:11:56.792 }, 00:11:56.792 { 00:11:56.792 "name": "BaseBdev3", 00:11:56.792 "uuid": "8357f2ab-8fd0-5bc9-af6f-818675120da6", 00:11:56.792 "is_configured": true, 00:11:56.792 "data_offset": 2048, 00:11:56.792 "data_size": 63488 00:11:56.792 }, 00:11:56.793 { 00:11:56.793 "name": "BaseBdev4", 00:11:56.793 "uuid": "7d547b67-94f4-56af-bff2-226afda1c5af", 00:11:56.793 "is_configured": true, 00:11:56.793 "data_offset": 2048, 00:11:56.793 "data_size": 63488 00:11:56.793 } 00:11:56.793 ] 00:11:56.793 }' 00:11:56.793 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.793 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.359 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:57.359 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.359 15:38:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:57.359 [2024-11-25 15:38:55.750051] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.359 [2024-11-25 15:38:55.750083] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.359 [2024-11-25 15:38:55.752808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.359 [2024-11-25 15:38:55.752945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.359 [2024-11-25 15:38:55.753000] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.359 [2024-11-25 15:38:55.753018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:57.359 { 00:11:57.359 "results": [ 00:11:57.359 { 00:11:57.359 "job": "raid_bdev1", 00:11:57.359 "core_mask": "0x1", 00:11:57.359 "workload": "randrw", 00:11:57.359 "percentage": 50, 00:11:57.359 "status": "finished", 00:11:57.359 "queue_depth": 1, 00:11:57.359 "io_size": 131072, 00:11:57.359 "runtime": 1.347285, 00:11:57.359 "iops": 16379.607878065888, 00:11:57.359 "mibps": 2047.450984758236, 00:11:57.359 "io_failed": 1, 00:11:57.359 "io_timeout": 0, 00:11:57.359 "avg_latency_us": 84.93836745847335, 00:11:57.359 "min_latency_us": 25.4882096069869, 00:11:57.359 "max_latency_us": 1366.5257641921398 00:11:57.359 } 00:11:57.359 ], 00:11:57.359 "core_count": 1 00:11:57.359 } 00:11:57.359 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.359 15:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72764 00:11:57.359 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72764 ']' 00:11:57.359 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72764 00:11:57.359 15:38:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:57.359 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.359 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72764 00:11:57.359 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.359 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.359 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72764' 00:11:57.359 killing process with pid 72764 00:11:57.359 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72764 00:11:57.359 [2024-11-25 15:38:55.797735] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:57.359 15:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72764 00:11:57.618 [2024-11-25 15:38:56.109110] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:58.993 15:38:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sxClq0kXon 00:11:58.993 15:38:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:58.993 15:38:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:58.993 15:38:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:58.993 15:38:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:58.993 15:38:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:58.993 15:38:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:58.993 ************************************ 00:11:58.993 END TEST raid_write_error_test 00:11:58.993 ************************************ 00:11:58.993 15:38:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:58.993 00:11:58.993 real 0m4.615s 00:11:58.993 user 0m5.486s 00:11:58.993 sys 0m0.557s 00:11:58.993 15:38:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.993 15:38:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.993 15:38:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:58.993 15:38:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:58.993 15:38:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:58.993 15:38:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.993 15:38:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:58.993 ************************************ 00:11:58.993 START TEST raid_state_function_test 00:11:58.993 ************************************ 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:58.993 15:38:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72913 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72913' 00:11:58.993 Process raid pid: 72913 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72913 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72913 ']' 00:11:58.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.993 15:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.993 [2024-11-25 15:38:57.411870] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:11:58.993 [2024-11-25 15:38:57.412015] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.993 [2024-11-25 15:38:57.583907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.252 [2024-11-25 15:38:57.693137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.252 [2024-11-25 15:38:57.895834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.252 [2024-11-25 15:38:57.895872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.819 [2024-11-25 15:38:58.234230] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:59.819 [2024-11-25 15:38:58.234284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:59.819 [2024-11-25 15:38:58.234294] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:59.819 [2024-11-25 15:38:58.234304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:59.819 [2024-11-25 15:38:58.234310] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:59.819 [2024-11-25 15:38:58.234318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:59.819 [2024-11-25 15:38:58.234325] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:59.819 [2024-11-25 15:38:58.234333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.819 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.820 15:38:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.820 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.820 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.820 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.820 "name": "Existed_Raid", 00:11:59.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.820 "strip_size_kb": 0, 00:11:59.820 "state": "configuring", 00:11:59.820 "raid_level": "raid1", 00:11:59.820 "superblock": false, 00:11:59.820 "num_base_bdevs": 4, 00:11:59.820 "num_base_bdevs_discovered": 0, 00:11:59.820 "num_base_bdevs_operational": 4, 00:11:59.820 "base_bdevs_list": [ 00:11:59.820 { 00:11:59.820 "name": "BaseBdev1", 00:11:59.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.820 "is_configured": false, 00:11:59.820 "data_offset": 0, 00:11:59.820 "data_size": 0 00:11:59.820 }, 00:11:59.820 { 00:11:59.820 "name": "BaseBdev2", 00:11:59.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.820 "is_configured": false, 00:11:59.820 "data_offset": 0, 00:11:59.820 "data_size": 0 00:11:59.820 }, 00:11:59.820 { 00:11:59.820 "name": "BaseBdev3", 00:11:59.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.820 "is_configured": false, 00:11:59.820 "data_offset": 0, 00:11:59.820 "data_size": 0 00:11:59.820 }, 00:11:59.820 { 00:11:59.820 "name": "BaseBdev4", 00:11:59.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.820 "is_configured": false, 00:11:59.820 "data_offset": 0, 00:11:59.820 "data_size": 0 00:11:59.820 } 00:11:59.820 ] 00:11:59.820 }' 00:11:59.820 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.820 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.082 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:00.082 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.082 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.082 [2024-11-25 15:38:58.657464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:00.082 [2024-11-25 15:38:58.657551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:00.082 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.082 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:00.082 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.082 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.082 [2024-11-25 15:38:58.669425] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:00.082 [2024-11-25 15:38:58.669498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:00.082 [2024-11-25 15:38:58.669541] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:00.082 [2024-11-25 15:38:58.669564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:00.082 [2024-11-25 15:38:58.669582] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:00.082 [2024-11-25 15:38:58.669602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:00.082 [2024-11-25 15:38:58.669619] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:00.082 [2024-11-25 15:38:58.669639] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:00.082 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.082 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:00.082 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.082 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.082 [2024-11-25 15:38:58.714242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.082 BaseBdev1 00:12:00.082 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.082 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:00.082 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.083 [ 00:12:00.083 { 00:12:00.083 "name": "BaseBdev1", 00:12:00.083 "aliases": [ 00:12:00.083 "2274ea43-ccf3-4292-8e0d-b6865c80d0db" 00:12:00.083 ], 00:12:00.083 "product_name": "Malloc disk", 00:12:00.083 "block_size": 512, 00:12:00.083 "num_blocks": 65536, 00:12:00.083 "uuid": "2274ea43-ccf3-4292-8e0d-b6865c80d0db", 00:12:00.083 "assigned_rate_limits": { 00:12:00.083 "rw_ios_per_sec": 0, 00:12:00.083 "rw_mbytes_per_sec": 0, 00:12:00.083 "r_mbytes_per_sec": 0, 00:12:00.083 "w_mbytes_per_sec": 0 00:12:00.083 }, 00:12:00.083 "claimed": true, 00:12:00.083 "claim_type": "exclusive_write", 00:12:00.083 "zoned": false, 00:12:00.083 "supported_io_types": { 00:12:00.083 "read": true, 00:12:00.083 "write": true, 00:12:00.083 "unmap": true, 00:12:00.083 "flush": true, 00:12:00.083 "reset": true, 00:12:00.083 "nvme_admin": false, 00:12:00.083 "nvme_io": false, 00:12:00.083 "nvme_io_md": false, 00:12:00.083 "write_zeroes": true, 00:12:00.083 "zcopy": true, 00:12:00.083 "get_zone_info": false, 00:12:00.083 "zone_management": false, 00:12:00.083 "zone_append": false, 00:12:00.083 "compare": false, 00:12:00.083 "compare_and_write": false, 00:12:00.083 "abort": true, 00:12:00.083 "seek_hole": false, 00:12:00.083 "seek_data": false, 00:12:00.083 "copy": true, 00:12:00.083 "nvme_iov_md": false 00:12:00.083 }, 00:12:00.083 "memory_domains": [ 00:12:00.083 { 00:12:00.083 "dma_device_id": "system", 00:12:00.083 "dma_device_type": 1 00:12:00.083 }, 00:12:00.083 { 00:12:00.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.083 "dma_device_type": 2 00:12:00.083 } 00:12:00.083 ], 00:12:00.083 "driver_specific": {} 00:12:00.083 } 00:12:00.083 ] 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.083 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.354 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.354 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.354 "name": "Existed_Raid", 
00:12:00.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.354 "strip_size_kb": 0, 00:12:00.354 "state": "configuring", 00:12:00.354 "raid_level": "raid1", 00:12:00.354 "superblock": false, 00:12:00.354 "num_base_bdevs": 4, 00:12:00.354 "num_base_bdevs_discovered": 1, 00:12:00.354 "num_base_bdevs_operational": 4, 00:12:00.354 "base_bdevs_list": [ 00:12:00.354 { 00:12:00.354 "name": "BaseBdev1", 00:12:00.354 "uuid": "2274ea43-ccf3-4292-8e0d-b6865c80d0db", 00:12:00.354 "is_configured": true, 00:12:00.354 "data_offset": 0, 00:12:00.354 "data_size": 65536 00:12:00.354 }, 00:12:00.354 { 00:12:00.354 "name": "BaseBdev2", 00:12:00.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.354 "is_configured": false, 00:12:00.354 "data_offset": 0, 00:12:00.354 "data_size": 0 00:12:00.354 }, 00:12:00.354 { 00:12:00.354 "name": "BaseBdev3", 00:12:00.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.354 "is_configured": false, 00:12:00.354 "data_offset": 0, 00:12:00.354 "data_size": 0 00:12:00.354 }, 00:12:00.354 { 00:12:00.354 "name": "BaseBdev4", 00:12:00.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.354 "is_configured": false, 00:12:00.354 "data_offset": 0, 00:12:00.354 "data_size": 0 00:12:00.354 } 00:12:00.354 ] 00:12:00.354 }' 00:12:00.354 15:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.354 15:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.624 [2024-11-25 15:38:59.165508] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:00.624 [2024-11-25 15:38:59.165639] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.624 [2024-11-25 15:38:59.177524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.624 [2024-11-25 15:38:59.179331] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:00.624 [2024-11-25 15:38:59.179375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:00.624 [2024-11-25 15:38:59.179386] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:00.624 [2024-11-25 15:38:59.179397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:00.624 [2024-11-25 15:38:59.179404] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:00.624 [2024-11-25 15:38:59.179412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:00.624 
15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.624 "name": "Existed_Raid", 00:12:00.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.624 "strip_size_kb": 0, 00:12:00.624 "state": "configuring", 00:12:00.624 "raid_level": "raid1", 00:12:00.624 "superblock": false, 00:12:00.624 "num_base_bdevs": 4, 00:12:00.624 "num_base_bdevs_discovered": 1, 
00:12:00.624 "num_base_bdevs_operational": 4, 00:12:00.624 "base_bdevs_list": [ 00:12:00.624 { 00:12:00.624 "name": "BaseBdev1", 00:12:00.624 "uuid": "2274ea43-ccf3-4292-8e0d-b6865c80d0db", 00:12:00.624 "is_configured": true, 00:12:00.624 "data_offset": 0, 00:12:00.624 "data_size": 65536 00:12:00.624 }, 00:12:00.624 { 00:12:00.624 "name": "BaseBdev2", 00:12:00.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.624 "is_configured": false, 00:12:00.624 "data_offset": 0, 00:12:00.624 "data_size": 0 00:12:00.624 }, 00:12:00.624 { 00:12:00.624 "name": "BaseBdev3", 00:12:00.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.624 "is_configured": false, 00:12:00.624 "data_offset": 0, 00:12:00.624 "data_size": 0 00:12:00.624 }, 00:12:00.624 { 00:12:00.624 "name": "BaseBdev4", 00:12:00.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.624 "is_configured": false, 00:12:00.624 "data_offset": 0, 00:12:00.624 "data_size": 0 00:12:00.624 } 00:12:00.624 ] 00:12:00.624 }' 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.624 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.190 [2024-11-25 15:38:59.672064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.190 BaseBdev2 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.190 [ 00:12:01.190 { 00:12:01.190 "name": "BaseBdev2", 00:12:01.190 "aliases": [ 00:12:01.190 "b7163910-8802-4f93-ab3f-177d3f3c7627" 00:12:01.190 ], 00:12:01.190 "product_name": "Malloc disk", 00:12:01.190 "block_size": 512, 00:12:01.190 "num_blocks": 65536, 00:12:01.190 "uuid": "b7163910-8802-4f93-ab3f-177d3f3c7627", 00:12:01.190 "assigned_rate_limits": { 00:12:01.190 "rw_ios_per_sec": 0, 00:12:01.190 "rw_mbytes_per_sec": 0, 00:12:01.190 "r_mbytes_per_sec": 0, 00:12:01.190 "w_mbytes_per_sec": 0 00:12:01.190 }, 00:12:01.190 "claimed": true, 00:12:01.190 "claim_type": "exclusive_write", 00:12:01.190 "zoned": false, 00:12:01.190 "supported_io_types": { 00:12:01.190 "read": true, 
00:12:01.190 "write": true, 00:12:01.190 "unmap": true, 00:12:01.190 "flush": true, 00:12:01.190 "reset": true, 00:12:01.190 "nvme_admin": false, 00:12:01.190 "nvme_io": false, 00:12:01.190 "nvme_io_md": false, 00:12:01.190 "write_zeroes": true, 00:12:01.190 "zcopy": true, 00:12:01.190 "get_zone_info": false, 00:12:01.190 "zone_management": false, 00:12:01.190 "zone_append": false, 00:12:01.190 "compare": false, 00:12:01.190 "compare_and_write": false, 00:12:01.190 "abort": true, 00:12:01.190 "seek_hole": false, 00:12:01.190 "seek_data": false, 00:12:01.190 "copy": true, 00:12:01.190 "nvme_iov_md": false 00:12:01.190 }, 00:12:01.190 "memory_domains": [ 00:12:01.190 { 00:12:01.190 "dma_device_id": "system", 00:12:01.190 "dma_device_type": 1 00:12:01.190 }, 00:12:01.190 { 00:12:01.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.190 "dma_device_type": 2 00:12:01.190 } 00:12:01.190 ], 00:12:01.190 "driver_specific": {} 00:12:01.190 } 00:12:01.190 ] 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.190 "name": "Existed_Raid", 00:12:01.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.190 "strip_size_kb": 0, 00:12:01.190 "state": "configuring", 00:12:01.190 "raid_level": "raid1", 00:12:01.190 "superblock": false, 00:12:01.190 "num_base_bdevs": 4, 00:12:01.190 "num_base_bdevs_discovered": 2, 00:12:01.190 "num_base_bdevs_operational": 4, 00:12:01.190 "base_bdevs_list": [ 00:12:01.190 { 00:12:01.190 "name": "BaseBdev1", 00:12:01.190 "uuid": "2274ea43-ccf3-4292-8e0d-b6865c80d0db", 00:12:01.190 "is_configured": true, 00:12:01.190 "data_offset": 0, 00:12:01.190 "data_size": 65536 00:12:01.190 }, 00:12:01.190 { 00:12:01.190 "name": "BaseBdev2", 00:12:01.190 "uuid": "b7163910-8802-4f93-ab3f-177d3f3c7627", 00:12:01.190 "is_configured": true, 
00:12:01.190 "data_offset": 0, 00:12:01.190 "data_size": 65536 00:12:01.190 }, 00:12:01.190 { 00:12:01.190 "name": "BaseBdev3", 00:12:01.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.190 "is_configured": false, 00:12:01.190 "data_offset": 0, 00:12:01.190 "data_size": 0 00:12:01.190 }, 00:12:01.190 { 00:12:01.190 "name": "BaseBdev4", 00:12:01.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.190 "is_configured": false, 00:12:01.190 "data_offset": 0, 00:12:01.190 "data_size": 0 00:12:01.190 } 00:12:01.190 ] 00:12:01.190 }' 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.190 15:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.754 [2024-11-25 15:39:00.215137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:01.754 BaseBdev3 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.754 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.754 [ 00:12:01.754 { 00:12:01.754 "name": "BaseBdev3", 00:12:01.754 "aliases": [ 00:12:01.754 "0586aac3-a81c-46d1-b9c0-69a3efe082e7" 00:12:01.754 ], 00:12:01.754 "product_name": "Malloc disk", 00:12:01.754 "block_size": 512, 00:12:01.754 "num_blocks": 65536, 00:12:01.754 "uuid": "0586aac3-a81c-46d1-b9c0-69a3efe082e7", 00:12:01.754 "assigned_rate_limits": { 00:12:01.754 "rw_ios_per_sec": 0, 00:12:01.754 "rw_mbytes_per_sec": 0, 00:12:01.754 "r_mbytes_per_sec": 0, 00:12:01.754 "w_mbytes_per_sec": 0 00:12:01.754 }, 00:12:01.754 "claimed": true, 00:12:01.754 "claim_type": "exclusive_write", 00:12:01.754 "zoned": false, 00:12:01.754 "supported_io_types": { 00:12:01.754 "read": true, 00:12:01.754 "write": true, 00:12:01.754 "unmap": true, 00:12:01.754 "flush": true, 00:12:01.754 "reset": true, 00:12:01.754 "nvme_admin": false, 00:12:01.754 "nvme_io": false, 00:12:01.754 "nvme_io_md": false, 00:12:01.754 "write_zeroes": true, 00:12:01.754 "zcopy": true, 00:12:01.754 "get_zone_info": false, 00:12:01.754 "zone_management": false, 00:12:01.754 "zone_append": false, 00:12:01.754 "compare": false, 00:12:01.754 "compare_and_write": false, 
00:12:01.754 "abort": true, 00:12:01.754 "seek_hole": false, 00:12:01.754 "seek_data": false, 00:12:01.754 "copy": true, 00:12:01.754 "nvme_iov_md": false 00:12:01.754 }, 00:12:01.754 "memory_domains": [ 00:12:01.754 { 00:12:01.754 "dma_device_id": "system", 00:12:01.754 "dma_device_type": 1 00:12:01.754 }, 00:12:01.754 { 00:12:01.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.754 "dma_device_type": 2 00:12:01.754 } 00:12:01.754 ], 00:12:01.754 "driver_specific": {} 00:12:01.754 } 00:12:01.754 ] 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.755 "name": "Existed_Raid", 00:12:01.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.755 "strip_size_kb": 0, 00:12:01.755 "state": "configuring", 00:12:01.755 "raid_level": "raid1", 00:12:01.755 "superblock": false, 00:12:01.755 "num_base_bdevs": 4, 00:12:01.755 "num_base_bdevs_discovered": 3, 00:12:01.755 "num_base_bdevs_operational": 4, 00:12:01.755 "base_bdevs_list": [ 00:12:01.755 { 00:12:01.755 "name": "BaseBdev1", 00:12:01.755 "uuid": "2274ea43-ccf3-4292-8e0d-b6865c80d0db", 00:12:01.755 "is_configured": true, 00:12:01.755 "data_offset": 0, 00:12:01.755 "data_size": 65536 00:12:01.755 }, 00:12:01.755 { 00:12:01.755 "name": "BaseBdev2", 00:12:01.755 "uuid": "b7163910-8802-4f93-ab3f-177d3f3c7627", 00:12:01.755 "is_configured": true, 00:12:01.755 "data_offset": 0, 00:12:01.755 "data_size": 65536 00:12:01.755 }, 00:12:01.755 { 00:12:01.755 "name": "BaseBdev3", 00:12:01.755 "uuid": "0586aac3-a81c-46d1-b9c0-69a3efe082e7", 00:12:01.755 "is_configured": true, 00:12:01.755 "data_offset": 0, 00:12:01.755 "data_size": 65536 00:12:01.755 }, 00:12:01.755 { 00:12:01.755 "name": "BaseBdev4", 00:12:01.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.755 "is_configured": false, 
00:12:01.755 "data_offset": 0, 00:12:01.755 "data_size": 0 00:12:01.755 } 00:12:01.755 ] 00:12:01.755 }' 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.755 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.012 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:02.012 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.012 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.270 [2024-11-25 15:39:00.722180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:02.270 [2024-11-25 15:39:00.722225] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:02.270 [2024-11-25 15:39:00.722233] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:02.270 [2024-11-25 15:39:00.722544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:02.270 [2024-11-25 15:39:00.722719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:02.270 [2024-11-25 15:39:00.722736] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:02.270 [2024-11-25 15:39:00.723043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.270 BaseBdev4 00:12:02.270 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.270 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:02.270 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:02.270 15:39:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.270 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:02.270 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.270 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.270 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.270 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.270 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.270 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.270 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:02.270 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.270 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.270 [ 00:12:02.270 { 00:12:02.270 "name": "BaseBdev4", 00:12:02.270 "aliases": [ 00:12:02.270 "de65815d-cfea-49c4-9466-d5740280555c" 00:12:02.270 ], 00:12:02.270 "product_name": "Malloc disk", 00:12:02.270 "block_size": 512, 00:12:02.270 "num_blocks": 65536, 00:12:02.270 "uuid": "de65815d-cfea-49c4-9466-d5740280555c", 00:12:02.270 "assigned_rate_limits": { 00:12:02.270 "rw_ios_per_sec": 0, 00:12:02.270 "rw_mbytes_per_sec": 0, 00:12:02.270 "r_mbytes_per_sec": 0, 00:12:02.270 "w_mbytes_per_sec": 0 00:12:02.270 }, 00:12:02.270 "claimed": true, 00:12:02.270 "claim_type": "exclusive_write", 00:12:02.270 "zoned": false, 00:12:02.270 "supported_io_types": { 00:12:02.270 "read": true, 00:12:02.270 "write": true, 00:12:02.270 "unmap": true, 00:12:02.270 "flush": true, 00:12:02.270 "reset": true, 00:12:02.270 
"nvme_admin": false, 00:12:02.270 "nvme_io": false, 00:12:02.270 "nvme_io_md": false, 00:12:02.270 "write_zeroes": true, 00:12:02.270 "zcopy": true, 00:12:02.270 "get_zone_info": false, 00:12:02.271 "zone_management": false, 00:12:02.271 "zone_append": false, 00:12:02.271 "compare": false, 00:12:02.271 "compare_and_write": false, 00:12:02.271 "abort": true, 00:12:02.271 "seek_hole": false, 00:12:02.271 "seek_data": false, 00:12:02.271 "copy": true, 00:12:02.271 "nvme_iov_md": false 00:12:02.271 }, 00:12:02.271 "memory_domains": [ 00:12:02.271 { 00:12:02.271 "dma_device_id": "system", 00:12:02.271 "dma_device_type": 1 00:12:02.271 }, 00:12:02.271 { 00:12:02.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.271 "dma_device_type": 2 00:12:02.271 } 00:12:02.271 ], 00:12:02.271 "driver_specific": {} 00:12:02.271 } 00:12:02.271 ] 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.271 15:39:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.271 "name": "Existed_Raid", 00:12:02.271 "uuid": "12fbcf0f-3b45-40fc-94b1-a2001c14a4fd", 00:12:02.271 "strip_size_kb": 0, 00:12:02.271 "state": "online", 00:12:02.271 "raid_level": "raid1", 00:12:02.271 "superblock": false, 00:12:02.271 "num_base_bdevs": 4, 00:12:02.271 "num_base_bdevs_discovered": 4, 00:12:02.271 "num_base_bdevs_operational": 4, 00:12:02.271 "base_bdevs_list": [ 00:12:02.271 { 00:12:02.271 "name": "BaseBdev1", 00:12:02.271 "uuid": "2274ea43-ccf3-4292-8e0d-b6865c80d0db", 00:12:02.271 "is_configured": true, 00:12:02.271 "data_offset": 0, 00:12:02.271 "data_size": 65536 00:12:02.271 }, 00:12:02.271 { 00:12:02.271 "name": "BaseBdev2", 00:12:02.271 "uuid": "b7163910-8802-4f93-ab3f-177d3f3c7627", 00:12:02.271 "is_configured": true, 00:12:02.271 "data_offset": 0, 00:12:02.271 "data_size": 65536 00:12:02.271 }, 00:12:02.271 { 00:12:02.271 "name": "BaseBdev3", 00:12:02.271 "uuid": 
"0586aac3-a81c-46d1-b9c0-69a3efe082e7", 00:12:02.271 "is_configured": true, 00:12:02.271 "data_offset": 0, 00:12:02.271 "data_size": 65536 00:12:02.271 }, 00:12:02.271 { 00:12:02.271 "name": "BaseBdev4", 00:12:02.271 "uuid": "de65815d-cfea-49c4-9466-d5740280555c", 00:12:02.271 "is_configured": true, 00:12:02.271 "data_offset": 0, 00:12:02.271 "data_size": 65536 00:12:02.271 } 00:12:02.271 ] 00:12:02.271 }' 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.271 15:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.529 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:02.529 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:02.529 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:02.529 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:02.529 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:02.529 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:02.529 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:02.529 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.529 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.529 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:02.529 [2024-11-25 15:39:01.189737] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:02.529 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.787 15:39:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:02.787 "name": "Existed_Raid", 00:12:02.787 "aliases": [ 00:12:02.787 "12fbcf0f-3b45-40fc-94b1-a2001c14a4fd" 00:12:02.787 ], 00:12:02.787 "product_name": "Raid Volume", 00:12:02.787 "block_size": 512, 00:12:02.787 "num_blocks": 65536, 00:12:02.787 "uuid": "12fbcf0f-3b45-40fc-94b1-a2001c14a4fd", 00:12:02.787 "assigned_rate_limits": { 00:12:02.787 "rw_ios_per_sec": 0, 00:12:02.787 "rw_mbytes_per_sec": 0, 00:12:02.787 "r_mbytes_per_sec": 0, 00:12:02.787 "w_mbytes_per_sec": 0 00:12:02.787 }, 00:12:02.787 "claimed": false, 00:12:02.787 "zoned": false, 00:12:02.787 "supported_io_types": { 00:12:02.787 "read": true, 00:12:02.787 "write": true, 00:12:02.787 "unmap": false, 00:12:02.787 "flush": false, 00:12:02.787 "reset": true, 00:12:02.787 "nvme_admin": false, 00:12:02.787 "nvme_io": false, 00:12:02.787 "nvme_io_md": false, 00:12:02.787 "write_zeroes": true, 00:12:02.787 "zcopy": false, 00:12:02.787 "get_zone_info": false, 00:12:02.787 "zone_management": false, 00:12:02.787 "zone_append": false, 00:12:02.787 "compare": false, 00:12:02.787 "compare_and_write": false, 00:12:02.787 "abort": false, 00:12:02.787 "seek_hole": false, 00:12:02.787 "seek_data": false, 00:12:02.787 "copy": false, 00:12:02.787 "nvme_iov_md": false 00:12:02.787 }, 00:12:02.787 "memory_domains": [ 00:12:02.787 { 00:12:02.787 "dma_device_id": "system", 00:12:02.787 "dma_device_type": 1 00:12:02.787 }, 00:12:02.787 { 00:12:02.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.787 "dma_device_type": 2 00:12:02.787 }, 00:12:02.787 { 00:12:02.787 "dma_device_id": "system", 00:12:02.787 "dma_device_type": 1 00:12:02.787 }, 00:12:02.787 { 00:12:02.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.787 "dma_device_type": 2 00:12:02.787 }, 00:12:02.787 { 00:12:02.787 "dma_device_id": "system", 00:12:02.787 "dma_device_type": 1 00:12:02.787 }, 00:12:02.787 { 00:12:02.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:02.787 "dma_device_type": 2 00:12:02.787 }, 00:12:02.787 { 00:12:02.787 "dma_device_id": "system", 00:12:02.787 "dma_device_type": 1 00:12:02.787 }, 00:12:02.787 { 00:12:02.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.787 "dma_device_type": 2 00:12:02.787 } 00:12:02.787 ], 00:12:02.787 "driver_specific": { 00:12:02.787 "raid": { 00:12:02.787 "uuid": "12fbcf0f-3b45-40fc-94b1-a2001c14a4fd", 00:12:02.787 "strip_size_kb": 0, 00:12:02.787 "state": "online", 00:12:02.787 "raid_level": "raid1", 00:12:02.787 "superblock": false, 00:12:02.787 "num_base_bdevs": 4, 00:12:02.787 "num_base_bdevs_discovered": 4, 00:12:02.787 "num_base_bdevs_operational": 4, 00:12:02.787 "base_bdevs_list": [ 00:12:02.787 { 00:12:02.787 "name": "BaseBdev1", 00:12:02.787 "uuid": "2274ea43-ccf3-4292-8e0d-b6865c80d0db", 00:12:02.788 "is_configured": true, 00:12:02.788 "data_offset": 0, 00:12:02.788 "data_size": 65536 00:12:02.788 }, 00:12:02.788 { 00:12:02.788 "name": "BaseBdev2", 00:12:02.788 "uuid": "b7163910-8802-4f93-ab3f-177d3f3c7627", 00:12:02.788 "is_configured": true, 00:12:02.788 "data_offset": 0, 00:12:02.788 "data_size": 65536 00:12:02.788 }, 00:12:02.788 { 00:12:02.788 "name": "BaseBdev3", 00:12:02.788 "uuid": "0586aac3-a81c-46d1-b9c0-69a3efe082e7", 00:12:02.788 "is_configured": true, 00:12:02.788 "data_offset": 0, 00:12:02.788 "data_size": 65536 00:12:02.788 }, 00:12:02.788 { 00:12:02.788 "name": "BaseBdev4", 00:12:02.788 "uuid": "de65815d-cfea-49c4-9466-d5740280555c", 00:12:02.788 "is_configured": true, 00:12:02.788 "data_offset": 0, 00:12:02.788 "data_size": 65536 00:12:02.788 } 00:12:02.788 ] 00:12:02.788 } 00:12:02.788 } 00:12:02.788 }' 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:02.788 BaseBdev2 00:12:02.788 BaseBdev3 
00:12:02.788 BaseBdev4' 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.788 15:39:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.788 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.046 15:39:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.046 [2024-11-25 15:39:01.500967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.046 
15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.046 "name": "Existed_Raid", 00:12:03.046 "uuid": "12fbcf0f-3b45-40fc-94b1-a2001c14a4fd", 00:12:03.046 "strip_size_kb": 0, 00:12:03.046 "state": "online", 00:12:03.046 "raid_level": "raid1", 00:12:03.046 "superblock": false, 00:12:03.046 "num_base_bdevs": 4, 00:12:03.046 "num_base_bdevs_discovered": 3, 00:12:03.046 "num_base_bdevs_operational": 3, 00:12:03.046 "base_bdevs_list": [ 00:12:03.046 { 00:12:03.046 "name": null, 00:12:03.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.046 "is_configured": false, 00:12:03.046 "data_offset": 0, 00:12:03.046 "data_size": 65536 00:12:03.046 }, 00:12:03.046 { 00:12:03.046 "name": "BaseBdev2", 00:12:03.046 "uuid": "b7163910-8802-4f93-ab3f-177d3f3c7627", 00:12:03.046 "is_configured": true, 00:12:03.046 "data_offset": 0, 00:12:03.046 "data_size": 65536 00:12:03.046 }, 00:12:03.046 { 00:12:03.046 "name": "BaseBdev3", 00:12:03.046 "uuid": "0586aac3-a81c-46d1-b9c0-69a3efe082e7", 00:12:03.046 "is_configured": true, 00:12:03.046 "data_offset": 0, 
00:12:03.046 "data_size": 65536 00:12:03.046 }, 00:12:03.046 { 00:12:03.046 "name": "BaseBdev4", 00:12:03.046 "uuid": "de65815d-cfea-49c4-9466-d5740280555c", 00:12:03.046 "is_configured": true, 00:12:03.046 "data_offset": 0, 00:12:03.046 "data_size": 65536 00:12:03.046 } 00:12:03.046 ] 00:12:03.046 }' 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.046 15:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.612 [2024-11-25 15:39:02.085298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.612 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.612 [2024-11-25 15:39:02.236590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.871 [2024-11-25 15:39:02.386927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:03.871 [2024-11-25 15:39:02.387104] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.871 [2024-11-25 15:39:02.479709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.871 [2024-11-25 15:39:02.479805] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.871 [2024-11-25 15:39:02.479847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.871 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.131 BaseBdev2 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.131 [ 00:12:04.131 { 00:12:04.131 "name": "BaseBdev2", 00:12:04.131 "aliases": [ 00:12:04.131 "a71d82e2-a8c7-45ac-8fe7-9eeac5652b25" 00:12:04.131 ], 00:12:04.131 "product_name": "Malloc disk", 00:12:04.131 "block_size": 512, 00:12:04.131 "num_blocks": 65536, 00:12:04.131 "uuid": "a71d82e2-a8c7-45ac-8fe7-9eeac5652b25", 00:12:04.131 "assigned_rate_limits": { 00:12:04.131 "rw_ios_per_sec": 0, 00:12:04.131 "rw_mbytes_per_sec": 0, 00:12:04.131 "r_mbytes_per_sec": 0, 00:12:04.131 "w_mbytes_per_sec": 0 00:12:04.131 }, 00:12:04.131 "claimed": false, 00:12:04.131 "zoned": false, 00:12:04.131 "supported_io_types": { 00:12:04.131 "read": true, 00:12:04.131 "write": true, 00:12:04.131 "unmap": true, 00:12:04.131 "flush": true, 00:12:04.131 "reset": true, 00:12:04.131 "nvme_admin": false, 00:12:04.131 "nvme_io": false, 00:12:04.131 "nvme_io_md": false, 00:12:04.131 "write_zeroes": true, 00:12:04.131 "zcopy": true, 00:12:04.131 "get_zone_info": false, 00:12:04.131 "zone_management": false, 00:12:04.131 "zone_append": false, 
00:12:04.131 "compare": false, 00:12:04.131 "compare_and_write": false, 00:12:04.131 "abort": true, 00:12:04.131 "seek_hole": false, 00:12:04.131 "seek_data": false, 00:12:04.131 "copy": true, 00:12:04.131 "nvme_iov_md": false 00:12:04.131 }, 00:12:04.131 "memory_domains": [ 00:12:04.131 { 00:12:04.131 "dma_device_id": "system", 00:12:04.131 "dma_device_type": 1 00:12:04.131 }, 00:12:04.131 { 00:12:04.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.131 "dma_device_type": 2 00:12:04.131 } 00:12:04.131 ], 00:12:04.131 "driver_specific": {} 00:12:04.131 } 00:12:04.131 ] 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.131 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.131 BaseBdev3 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.132 [ 00:12:04.132 { 00:12:04.132 "name": "BaseBdev3", 00:12:04.132 "aliases": [ 00:12:04.132 "e366b2c9-fafc-4398-b0a9-a2cd22a518a4" 00:12:04.132 ], 00:12:04.132 "product_name": "Malloc disk", 00:12:04.132 "block_size": 512, 00:12:04.132 "num_blocks": 65536, 00:12:04.132 "uuid": "e366b2c9-fafc-4398-b0a9-a2cd22a518a4", 00:12:04.132 "assigned_rate_limits": { 00:12:04.132 "rw_ios_per_sec": 0, 00:12:04.132 "rw_mbytes_per_sec": 0, 00:12:04.132 "r_mbytes_per_sec": 0, 00:12:04.132 "w_mbytes_per_sec": 0 00:12:04.132 }, 00:12:04.132 "claimed": false, 00:12:04.132 "zoned": false, 00:12:04.132 "supported_io_types": { 00:12:04.132 "read": true, 00:12:04.132 "write": true, 00:12:04.132 "unmap": true, 00:12:04.132 "flush": true, 00:12:04.132 "reset": true, 00:12:04.132 "nvme_admin": false, 00:12:04.132 "nvme_io": false, 00:12:04.132 "nvme_io_md": false, 00:12:04.132 "write_zeroes": true, 00:12:04.132 "zcopy": true, 00:12:04.132 "get_zone_info": false, 00:12:04.132 "zone_management": false, 00:12:04.132 "zone_append": false, 
00:12:04.132 "compare": false, 00:12:04.132 "compare_and_write": false, 00:12:04.132 "abort": true, 00:12:04.132 "seek_hole": false, 00:12:04.132 "seek_data": false, 00:12:04.132 "copy": true, 00:12:04.132 "nvme_iov_md": false 00:12:04.132 }, 00:12:04.132 "memory_domains": [ 00:12:04.132 { 00:12:04.132 "dma_device_id": "system", 00:12:04.132 "dma_device_type": 1 00:12:04.132 }, 00:12:04.132 { 00:12:04.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.132 "dma_device_type": 2 00:12:04.132 } 00:12:04.132 ], 00:12:04.132 "driver_specific": {} 00:12:04.132 } 00:12:04.132 ] 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.132 BaseBdev4 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.132 [ 00:12:04.132 { 00:12:04.132 "name": "BaseBdev4", 00:12:04.132 "aliases": [ 00:12:04.132 "334550ce-d69e-4d10-81aa-18264c8948cb" 00:12:04.132 ], 00:12:04.132 "product_name": "Malloc disk", 00:12:04.132 "block_size": 512, 00:12:04.132 "num_blocks": 65536, 00:12:04.132 "uuid": "334550ce-d69e-4d10-81aa-18264c8948cb", 00:12:04.132 "assigned_rate_limits": { 00:12:04.132 "rw_ios_per_sec": 0, 00:12:04.132 "rw_mbytes_per_sec": 0, 00:12:04.132 "r_mbytes_per_sec": 0, 00:12:04.132 "w_mbytes_per_sec": 0 00:12:04.132 }, 00:12:04.132 "claimed": false, 00:12:04.132 "zoned": false, 00:12:04.132 "supported_io_types": { 00:12:04.132 "read": true, 00:12:04.132 "write": true, 00:12:04.132 "unmap": true, 00:12:04.132 "flush": true, 00:12:04.132 "reset": true, 00:12:04.132 "nvme_admin": false, 00:12:04.132 "nvme_io": false, 00:12:04.132 "nvme_io_md": false, 00:12:04.132 "write_zeroes": true, 00:12:04.132 "zcopy": true, 00:12:04.132 "get_zone_info": false, 00:12:04.132 "zone_management": false, 00:12:04.132 "zone_append": false, 
00:12:04.132 "compare": false, 00:12:04.132 "compare_and_write": false, 00:12:04.132 "abort": true, 00:12:04.132 "seek_hole": false, 00:12:04.132 "seek_data": false, 00:12:04.132 "copy": true, 00:12:04.132 "nvme_iov_md": false 00:12:04.132 }, 00:12:04.132 "memory_domains": [ 00:12:04.132 { 00:12:04.132 "dma_device_id": "system", 00:12:04.132 "dma_device_type": 1 00:12:04.132 }, 00:12:04.132 { 00:12:04.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.132 "dma_device_type": 2 00:12:04.132 } 00:12:04.132 ], 00:12:04.132 "driver_specific": {} 00:12:04.132 } 00:12:04.132 ] 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.132 [2024-11-25 15:39:02.774862] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:04.132 [2024-11-25 15:39:02.774954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:04.132 [2024-11-25 15:39:02.774998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.132 [2024-11-25 15:39:02.776836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:04.132 [2024-11-25 15:39:02.776927] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.132 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.392 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:04.392 "name": "Existed_Raid", 00:12:04.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.392 "strip_size_kb": 0, 00:12:04.392 "state": "configuring", 00:12:04.392 "raid_level": "raid1", 00:12:04.392 "superblock": false, 00:12:04.392 "num_base_bdevs": 4, 00:12:04.392 "num_base_bdevs_discovered": 3, 00:12:04.392 "num_base_bdevs_operational": 4, 00:12:04.392 "base_bdevs_list": [ 00:12:04.392 { 00:12:04.392 "name": "BaseBdev1", 00:12:04.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.392 "is_configured": false, 00:12:04.392 "data_offset": 0, 00:12:04.392 "data_size": 0 00:12:04.392 }, 00:12:04.392 { 00:12:04.392 "name": "BaseBdev2", 00:12:04.392 "uuid": "a71d82e2-a8c7-45ac-8fe7-9eeac5652b25", 00:12:04.392 "is_configured": true, 00:12:04.392 "data_offset": 0, 00:12:04.392 "data_size": 65536 00:12:04.392 }, 00:12:04.392 { 00:12:04.392 "name": "BaseBdev3", 00:12:04.392 "uuid": "e366b2c9-fafc-4398-b0a9-a2cd22a518a4", 00:12:04.392 "is_configured": true, 00:12:04.392 "data_offset": 0, 00:12:04.392 "data_size": 65536 00:12:04.392 }, 00:12:04.392 { 00:12:04.392 "name": "BaseBdev4", 00:12:04.392 "uuid": "334550ce-d69e-4d10-81aa-18264c8948cb", 00:12:04.392 "is_configured": true, 00:12:04.392 "data_offset": 0, 00:12:04.392 "data_size": 65536 00:12:04.392 } 00:12:04.392 ] 00:12:04.392 }' 00:12:04.392 15:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.392 15:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.652 [2024-11-25 15:39:03.194161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.652 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.652 "name": "Existed_Raid", 00:12:04.652 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:04.652 "strip_size_kb": 0, 00:12:04.652 "state": "configuring", 00:12:04.652 "raid_level": "raid1", 00:12:04.652 "superblock": false, 00:12:04.652 "num_base_bdevs": 4, 00:12:04.652 "num_base_bdevs_discovered": 2, 00:12:04.652 "num_base_bdevs_operational": 4, 00:12:04.652 "base_bdevs_list": [ 00:12:04.652 { 00:12:04.652 "name": "BaseBdev1", 00:12:04.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.652 "is_configured": false, 00:12:04.652 "data_offset": 0, 00:12:04.652 "data_size": 0 00:12:04.652 }, 00:12:04.652 { 00:12:04.652 "name": null, 00:12:04.652 "uuid": "a71d82e2-a8c7-45ac-8fe7-9eeac5652b25", 00:12:04.653 "is_configured": false, 00:12:04.653 "data_offset": 0, 00:12:04.653 "data_size": 65536 00:12:04.653 }, 00:12:04.653 { 00:12:04.653 "name": "BaseBdev3", 00:12:04.653 "uuid": "e366b2c9-fafc-4398-b0a9-a2cd22a518a4", 00:12:04.653 "is_configured": true, 00:12:04.653 "data_offset": 0, 00:12:04.653 "data_size": 65536 00:12:04.653 }, 00:12:04.653 { 00:12:04.653 "name": "BaseBdev4", 00:12:04.653 "uuid": "334550ce-d69e-4d10-81aa-18264c8948cb", 00:12:04.653 "is_configured": true, 00:12:04.653 "data_offset": 0, 00:12:04.653 "data_size": 65536 00:12:04.653 } 00:12:04.653 ] 00:12:04.653 }' 00:12:04.653 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.653 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.221 [2024-11-25 15:39:03.737180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.221 BaseBdev1 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.221 [ 00:12:05.221 { 00:12:05.221 "name": "BaseBdev1", 00:12:05.221 "aliases": [ 00:12:05.221 "f1b70bca-d7fe-48d9-a0d3-9cb0d871409f" 00:12:05.221 ], 00:12:05.221 "product_name": "Malloc disk", 00:12:05.221 "block_size": 512, 00:12:05.221 "num_blocks": 65536, 00:12:05.221 "uuid": "f1b70bca-d7fe-48d9-a0d3-9cb0d871409f", 00:12:05.221 "assigned_rate_limits": { 00:12:05.221 "rw_ios_per_sec": 0, 00:12:05.221 "rw_mbytes_per_sec": 0, 00:12:05.221 "r_mbytes_per_sec": 0, 00:12:05.221 "w_mbytes_per_sec": 0 00:12:05.221 }, 00:12:05.221 "claimed": true, 00:12:05.221 "claim_type": "exclusive_write", 00:12:05.221 "zoned": false, 00:12:05.221 "supported_io_types": { 00:12:05.221 "read": true, 00:12:05.221 "write": true, 00:12:05.221 "unmap": true, 00:12:05.221 "flush": true, 00:12:05.221 "reset": true, 00:12:05.221 "nvme_admin": false, 00:12:05.221 "nvme_io": false, 00:12:05.221 "nvme_io_md": false, 00:12:05.221 "write_zeroes": true, 00:12:05.221 "zcopy": true, 00:12:05.221 "get_zone_info": false, 00:12:05.221 "zone_management": false, 00:12:05.221 "zone_append": false, 00:12:05.221 "compare": false, 00:12:05.221 "compare_and_write": false, 00:12:05.221 "abort": true, 00:12:05.221 "seek_hole": false, 00:12:05.221 "seek_data": false, 00:12:05.221 "copy": true, 00:12:05.221 "nvme_iov_md": false 00:12:05.221 }, 00:12:05.221 "memory_domains": [ 00:12:05.221 { 00:12:05.221 "dma_device_id": "system", 00:12:05.221 "dma_device_type": 1 00:12:05.221 }, 00:12:05.221 { 00:12:05.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.221 "dma_device_type": 2 00:12:05.221 } 00:12:05.221 ], 00:12:05.221 "driver_specific": {} 00:12:05.221 } 00:12:05.221 ] 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
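After `bdev_malloc_create 32 512 -b BaseBdev1`, the trace calls `waitforbdev BaseBdev1`, which (per `autotest_common.sh` lines visible in the log) defaults `bdev_timeout` to 2000 ms, runs `bdev_wait_for_examine`, and then `bdev_get_bdevs -b BaseBdev1 -t 2000`. As a rough Python analogue of that wait-until-visible pattern, here is a polling sketch; `get_bdev` is a hypothetical stand-in for the real RPC call, injected so the loop can be demonstrated without an SPDK target.

```python
import time

def waitforbdev(bdev_name, get_bdev, timeout_s=2.0, poll_interval_s=0.1):
    # Poll until the bdev lookup succeeds or the timeout (2000 ms by
    # default in the shell helper) expires.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        bdev = get_bdev(bdev_name)
        if bdev is not None:
            return bdev
        time.sleep(poll_interval_s)
    raise TimeoutError(f"bdev {bdev_name} did not appear within {timeout_s}s")

# Simulated registry: BaseBdev1 becomes visible on the third poll, as if
# examine completed between RPC calls.
calls = {"n": 0}
def fake_get_bdev(name):
    calls["n"] += 1
    if calls["n"] >= 3:
        return {"name": name, "block_size": 512, "num_blocks": 65536}
    return None

print(waitforbdev("BaseBdev1", fake_get_bdev)["name"])  # BaseBdev1
```

In the real script the timeout is handled inside the RPC itself (`-t 2000`) rather than by a client-side loop; the sketch only mirrors the observable behavior of blocking until the bdev is registered.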
00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.221 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.221 "name": "Existed_Raid", 00:12:05.221 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:05.221 "strip_size_kb": 0, 00:12:05.221 "state": "configuring", 00:12:05.221 "raid_level": "raid1", 00:12:05.221 "superblock": false, 00:12:05.221 "num_base_bdevs": 4, 00:12:05.221 "num_base_bdevs_discovered": 3, 00:12:05.221 "num_base_bdevs_operational": 4, 00:12:05.221 "base_bdevs_list": [ 00:12:05.221 { 00:12:05.221 "name": "BaseBdev1", 00:12:05.221 "uuid": "f1b70bca-d7fe-48d9-a0d3-9cb0d871409f", 00:12:05.221 "is_configured": true, 00:12:05.221 "data_offset": 0, 00:12:05.221 "data_size": 65536 00:12:05.221 }, 00:12:05.221 { 00:12:05.222 "name": null, 00:12:05.222 "uuid": "a71d82e2-a8c7-45ac-8fe7-9eeac5652b25", 00:12:05.222 "is_configured": false, 00:12:05.222 "data_offset": 0, 00:12:05.222 "data_size": 65536 00:12:05.222 }, 00:12:05.222 { 00:12:05.222 "name": "BaseBdev3", 00:12:05.222 "uuid": "e366b2c9-fafc-4398-b0a9-a2cd22a518a4", 00:12:05.222 "is_configured": true, 00:12:05.222 "data_offset": 0, 00:12:05.222 "data_size": 65536 00:12:05.222 }, 00:12:05.222 { 00:12:05.222 "name": "BaseBdev4", 00:12:05.222 "uuid": "334550ce-d69e-4d10-81aa-18264c8948cb", 00:12:05.222 "is_configured": true, 00:12:05.222 "data_offset": 0, 00:12:05.222 "data_size": 65536 00:12:05.222 } 00:12:05.222 ] 00:12:05.222 }' 00:12:05.222 15:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.222 15:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.789 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.789 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:05.789 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.789 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.789 15:39:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.789 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:05.789 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:05.789 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.789 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.790 [2024-11-25 15:39:04.256346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.790 "name": "Existed_Raid", 00:12:05.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.790 "strip_size_kb": 0, 00:12:05.790 "state": "configuring", 00:12:05.790 "raid_level": "raid1", 00:12:05.790 "superblock": false, 00:12:05.790 "num_base_bdevs": 4, 00:12:05.790 "num_base_bdevs_discovered": 2, 00:12:05.790 "num_base_bdevs_operational": 4, 00:12:05.790 "base_bdevs_list": [ 00:12:05.790 { 00:12:05.790 "name": "BaseBdev1", 00:12:05.790 "uuid": "f1b70bca-d7fe-48d9-a0d3-9cb0d871409f", 00:12:05.790 "is_configured": true, 00:12:05.790 "data_offset": 0, 00:12:05.790 "data_size": 65536 00:12:05.790 }, 00:12:05.790 { 00:12:05.790 "name": null, 00:12:05.790 "uuid": "a71d82e2-a8c7-45ac-8fe7-9eeac5652b25", 00:12:05.790 "is_configured": false, 00:12:05.790 "data_offset": 0, 00:12:05.790 "data_size": 65536 00:12:05.790 }, 00:12:05.790 { 00:12:05.790 "name": null, 00:12:05.790 "uuid": "e366b2c9-fafc-4398-b0a9-a2cd22a518a4", 00:12:05.790 "is_configured": false, 00:12:05.790 "data_offset": 0, 00:12:05.790 "data_size": 65536 00:12:05.790 }, 00:12:05.790 { 00:12:05.790 "name": "BaseBdev4", 00:12:05.790 "uuid": "334550ce-d69e-4d10-81aa-18264c8948cb", 00:12:05.790 "is_configured": true, 00:12:05.790 "data_offset": 0, 00:12:05.790 "data_size": 65536 00:12:05.790 } 00:12:05.790 ] 00:12:05.790 }' 00:12:05.790 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.790 15:39:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.359 [2024-11-25 15:39:04.783472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.359 15:39:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.359 "name": "Existed_Raid", 00:12:06.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.359 "strip_size_kb": 0, 00:12:06.359 "state": "configuring", 00:12:06.359 "raid_level": "raid1", 00:12:06.359 "superblock": false, 00:12:06.359 "num_base_bdevs": 4, 00:12:06.359 "num_base_bdevs_discovered": 3, 00:12:06.359 "num_base_bdevs_operational": 4, 00:12:06.359 "base_bdevs_list": [ 00:12:06.359 { 00:12:06.359 "name": "BaseBdev1", 00:12:06.359 "uuid": "f1b70bca-d7fe-48d9-a0d3-9cb0d871409f", 00:12:06.359 "is_configured": true, 00:12:06.359 "data_offset": 0, 00:12:06.359 "data_size": 65536 00:12:06.359 }, 00:12:06.359 { 00:12:06.359 "name": null, 00:12:06.359 "uuid": "a71d82e2-a8c7-45ac-8fe7-9eeac5652b25", 00:12:06.359 "is_configured": false, 00:12:06.359 "data_offset": 
0, 00:12:06.359 "data_size": 65536 00:12:06.359 }, 00:12:06.359 { 00:12:06.359 "name": "BaseBdev3", 00:12:06.359 "uuid": "e366b2c9-fafc-4398-b0a9-a2cd22a518a4", 00:12:06.359 "is_configured": true, 00:12:06.359 "data_offset": 0, 00:12:06.359 "data_size": 65536 00:12:06.359 }, 00:12:06.359 { 00:12:06.359 "name": "BaseBdev4", 00:12:06.359 "uuid": "334550ce-d69e-4d10-81aa-18264c8948cb", 00:12:06.359 "is_configured": true, 00:12:06.359 "data_offset": 0, 00:12:06.359 "data_size": 65536 00:12:06.359 } 00:12:06.359 ] 00:12:06.359 }' 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.359 15:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.618 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:06.618 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.619 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.619 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.619 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.619 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:06.619 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:06.619 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.619 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.619 [2024-11-25 15:39:05.278789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.877 15:39:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.877 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.877 "name": "Existed_Raid", 00:12:06.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.877 "strip_size_kb": 0, 00:12:06.877 "state": "configuring", 00:12:06.877 
"raid_level": "raid1", 00:12:06.877 "superblock": false, 00:12:06.877 "num_base_bdevs": 4, 00:12:06.877 "num_base_bdevs_discovered": 2, 00:12:06.877 "num_base_bdevs_operational": 4, 00:12:06.877 "base_bdevs_list": [ 00:12:06.877 { 00:12:06.877 "name": null, 00:12:06.877 "uuid": "f1b70bca-d7fe-48d9-a0d3-9cb0d871409f", 00:12:06.877 "is_configured": false, 00:12:06.877 "data_offset": 0, 00:12:06.877 "data_size": 65536 00:12:06.877 }, 00:12:06.877 { 00:12:06.877 "name": null, 00:12:06.877 "uuid": "a71d82e2-a8c7-45ac-8fe7-9eeac5652b25", 00:12:06.877 "is_configured": false, 00:12:06.877 "data_offset": 0, 00:12:06.877 "data_size": 65536 00:12:06.877 }, 00:12:06.877 { 00:12:06.877 "name": "BaseBdev3", 00:12:06.877 "uuid": "e366b2c9-fafc-4398-b0a9-a2cd22a518a4", 00:12:06.877 "is_configured": true, 00:12:06.877 "data_offset": 0, 00:12:06.877 "data_size": 65536 00:12:06.877 }, 00:12:06.877 { 00:12:06.877 "name": "BaseBdev4", 00:12:06.877 "uuid": "334550ce-d69e-4d10-81aa-18264c8948cb", 00:12:06.877 "is_configured": true, 00:12:06.877 "data_offset": 0, 00:12:06.877 "data_size": 65536 00:12:06.877 } 00:12:06.877 ] 00:12:06.877 }' 00:12:06.878 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.878 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.446 [2024-11-25 15:39:05.896052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.446 "name": "Existed_Raid", 00:12:07.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.446 "strip_size_kb": 0, 00:12:07.446 "state": "configuring", 00:12:07.446 "raid_level": "raid1", 00:12:07.446 "superblock": false, 00:12:07.446 "num_base_bdevs": 4, 00:12:07.446 "num_base_bdevs_discovered": 3, 00:12:07.446 "num_base_bdevs_operational": 4, 00:12:07.446 "base_bdevs_list": [ 00:12:07.446 { 00:12:07.446 "name": null, 00:12:07.446 "uuid": "f1b70bca-d7fe-48d9-a0d3-9cb0d871409f", 00:12:07.446 "is_configured": false, 00:12:07.446 "data_offset": 0, 00:12:07.446 "data_size": 65536 00:12:07.446 }, 00:12:07.446 { 00:12:07.446 "name": "BaseBdev2", 00:12:07.446 "uuid": "a71d82e2-a8c7-45ac-8fe7-9eeac5652b25", 00:12:07.446 "is_configured": true, 00:12:07.446 "data_offset": 0, 00:12:07.446 "data_size": 65536 00:12:07.446 }, 00:12:07.446 { 00:12:07.446 "name": "BaseBdev3", 00:12:07.446 "uuid": "e366b2c9-fafc-4398-b0a9-a2cd22a518a4", 00:12:07.446 "is_configured": true, 00:12:07.446 "data_offset": 0, 00:12:07.446 "data_size": 65536 00:12:07.446 }, 00:12:07.446 { 00:12:07.446 "name": "BaseBdev4", 00:12:07.446 "uuid": "334550ce-d69e-4d10-81aa-18264c8948cb", 00:12:07.446 "is_configured": true, 00:12:07.446 "data_offset": 0, 00:12:07.446 "data_size": 65536 00:12:07.446 } 00:12:07.446 ] 00:12:07.446 }' 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.446 15:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.705 15:39:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.705 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:07.705 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.705 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.705 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.705 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:07.705 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.705 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.705 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.705 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:07.705 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.705 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f1b70bca-d7fe-48d9-a0d3-9cb0d871409f 00:12:07.705 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.705 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.964 [2024-11-25 15:39:06.415800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:07.964 [2024-11-25 15:39:06.415915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:07.964 [2024-11-25 15:39:06.415941] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:07.964 
[2024-11-25 15:39:06.416268] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:07.964 [2024-11-25 15:39:06.416471] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:07.964 [2024-11-25 15:39:06.416485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:07.964 [2024-11-25 15:39:06.416738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.964 NewBaseBdev 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.964 [ 00:12:07.964 { 00:12:07.964 "name": "NewBaseBdev", 00:12:07.964 "aliases": [ 00:12:07.964 "f1b70bca-d7fe-48d9-a0d3-9cb0d871409f" 00:12:07.964 ], 00:12:07.964 "product_name": "Malloc disk", 00:12:07.964 "block_size": 512, 00:12:07.964 "num_blocks": 65536, 00:12:07.964 "uuid": "f1b70bca-d7fe-48d9-a0d3-9cb0d871409f", 00:12:07.964 "assigned_rate_limits": { 00:12:07.964 "rw_ios_per_sec": 0, 00:12:07.964 "rw_mbytes_per_sec": 0, 00:12:07.964 "r_mbytes_per_sec": 0, 00:12:07.964 "w_mbytes_per_sec": 0 00:12:07.964 }, 00:12:07.964 "claimed": true, 00:12:07.964 "claim_type": "exclusive_write", 00:12:07.964 "zoned": false, 00:12:07.964 "supported_io_types": { 00:12:07.964 "read": true, 00:12:07.964 "write": true, 00:12:07.964 "unmap": true, 00:12:07.964 "flush": true, 00:12:07.964 "reset": true, 00:12:07.964 "nvme_admin": false, 00:12:07.964 "nvme_io": false, 00:12:07.964 "nvme_io_md": false, 00:12:07.964 "write_zeroes": true, 00:12:07.964 "zcopy": true, 00:12:07.964 "get_zone_info": false, 00:12:07.964 "zone_management": false, 00:12:07.964 "zone_append": false, 00:12:07.964 "compare": false, 00:12:07.964 "compare_and_write": false, 00:12:07.964 "abort": true, 00:12:07.964 "seek_hole": false, 00:12:07.964 "seek_data": false, 00:12:07.964 "copy": true, 00:12:07.964 "nvme_iov_md": false 00:12:07.964 }, 00:12:07.964 "memory_domains": [ 00:12:07.964 { 00:12:07.964 "dma_device_id": "system", 00:12:07.964 "dma_device_type": 1 00:12:07.964 }, 00:12:07.964 { 00:12:07.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.964 "dma_device_type": 2 00:12:07.964 } 00:12:07.964 ], 00:12:07.964 "driver_specific": {} 00:12:07.964 } 00:12:07.964 ] 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.964 "name": "Existed_Raid", 00:12:07.964 "uuid": "74a7dc4f-8a94-4cf6-862a-f4e9ad08d0e7", 00:12:07.964 "strip_size_kb": 0, 00:12:07.964 "state": "online", 00:12:07.964 
"raid_level": "raid1", 00:12:07.964 "superblock": false, 00:12:07.964 "num_base_bdevs": 4, 00:12:07.964 "num_base_bdevs_discovered": 4, 00:12:07.964 "num_base_bdevs_operational": 4, 00:12:07.964 "base_bdevs_list": [ 00:12:07.964 { 00:12:07.964 "name": "NewBaseBdev", 00:12:07.964 "uuid": "f1b70bca-d7fe-48d9-a0d3-9cb0d871409f", 00:12:07.964 "is_configured": true, 00:12:07.964 "data_offset": 0, 00:12:07.964 "data_size": 65536 00:12:07.964 }, 00:12:07.964 { 00:12:07.964 "name": "BaseBdev2", 00:12:07.964 "uuid": "a71d82e2-a8c7-45ac-8fe7-9eeac5652b25", 00:12:07.964 "is_configured": true, 00:12:07.964 "data_offset": 0, 00:12:07.964 "data_size": 65536 00:12:07.964 }, 00:12:07.964 { 00:12:07.964 "name": "BaseBdev3", 00:12:07.964 "uuid": "e366b2c9-fafc-4398-b0a9-a2cd22a518a4", 00:12:07.964 "is_configured": true, 00:12:07.964 "data_offset": 0, 00:12:07.964 "data_size": 65536 00:12:07.964 }, 00:12:07.964 { 00:12:07.964 "name": "BaseBdev4", 00:12:07.964 "uuid": "334550ce-d69e-4d10-81aa-18264c8948cb", 00:12:07.964 "is_configured": true, 00:12:07.964 "data_offset": 0, 00:12:07.964 "data_size": 65536 00:12:07.964 } 00:12:07.964 ] 00:12:07.964 }' 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.964 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.531 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:08.531 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:08.531 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:08.531 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:08.531 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:08.531 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:08.531 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:08.531 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:08.531 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.531 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.531 [2024-11-25 15:39:06.951298] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.531 15:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.531 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:08.531 "name": "Existed_Raid", 00:12:08.531 "aliases": [ 00:12:08.531 "74a7dc4f-8a94-4cf6-862a-f4e9ad08d0e7" 00:12:08.531 ], 00:12:08.531 "product_name": "Raid Volume", 00:12:08.531 "block_size": 512, 00:12:08.531 "num_blocks": 65536, 00:12:08.531 "uuid": "74a7dc4f-8a94-4cf6-862a-f4e9ad08d0e7", 00:12:08.531 "assigned_rate_limits": { 00:12:08.531 "rw_ios_per_sec": 0, 00:12:08.531 "rw_mbytes_per_sec": 0, 00:12:08.531 "r_mbytes_per_sec": 0, 00:12:08.531 "w_mbytes_per_sec": 0 00:12:08.531 }, 00:12:08.531 "claimed": false, 00:12:08.531 "zoned": false, 00:12:08.531 "supported_io_types": { 00:12:08.531 "read": true, 00:12:08.531 "write": true, 00:12:08.531 "unmap": false, 00:12:08.531 "flush": false, 00:12:08.531 "reset": true, 00:12:08.531 "nvme_admin": false, 00:12:08.531 "nvme_io": false, 00:12:08.531 "nvme_io_md": false, 00:12:08.531 "write_zeroes": true, 00:12:08.531 "zcopy": false, 00:12:08.532 "get_zone_info": false, 00:12:08.532 "zone_management": false, 00:12:08.532 "zone_append": false, 00:12:08.532 "compare": false, 00:12:08.532 "compare_and_write": false, 00:12:08.532 "abort": false, 00:12:08.532 "seek_hole": false, 00:12:08.532 "seek_data": false, 00:12:08.532 
"copy": false, 00:12:08.532 "nvme_iov_md": false 00:12:08.532 }, 00:12:08.532 "memory_domains": [ 00:12:08.532 { 00:12:08.532 "dma_device_id": "system", 00:12:08.532 "dma_device_type": 1 00:12:08.532 }, 00:12:08.532 { 00:12:08.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.532 "dma_device_type": 2 00:12:08.532 }, 00:12:08.532 { 00:12:08.532 "dma_device_id": "system", 00:12:08.532 "dma_device_type": 1 00:12:08.532 }, 00:12:08.532 { 00:12:08.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.532 "dma_device_type": 2 00:12:08.532 }, 00:12:08.532 { 00:12:08.532 "dma_device_id": "system", 00:12:08.532 "dma_device_type": 1 00:12:08.532 }, 00:12:08.532 { 00:12:08.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.532 "dma_device_type": 2 00:12:08.532 }, 00:12:08.532 { 00:12:08.532 "dma_device_id": "system", 00:12:08.532 "dma_device_type": 1 00:12:08.532 }, 00:12:08.532 { 00:12:08.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.532 "dma_device_type": 2 00:12:08.532 } 00:12:08.532 ], 00:12:08.532 "driver_specific": { 00:12:08.532 "raid": { 00:12:08.532 "uuid": "74a7dc4f-8a94-4cf6-862a-f4e9ad08d0e7", 00:12:08.532 "strip_size_kb": 0, 00:12:08.532 "state": "online", 00:12:08.532 "raid_level": "raid1", 00:12:08.532 "superblock": false, 00:12:08.532 "num_base_bdevs": 4, 00:12:08.532 "num_base_bdevs_discovered": 4, 00:12:08.532 "num_base_bdevs_operational": 4, 00:12:08.532 "base_bdevs_list": [ 00:12:08.532 { 00:12:08.532 "name": "NewBaseBdev", 00:12:08.532 "uuid": "f1b70bca-d7fe-48d9-a0d3-9cb0d871409f", 00:12:08.532 "is_configured": true, 00:12:08.532 "data_offset": 0, 00:12:08.532 "data_size": 65536 00:12:08.532 }, 00:12:08.532 { 00:12:08.532 "name": "BaseBdev2", 00:12:08.532 "uuid": "a71d82e2-a8c7-45ac-8fe7-9eeac5652b25", 00:12:08.532 "is_configured": true, 00:12:08.532 "data_offset": 0, 00:12:08.532 "data_size": 65536 00:12:08.532 }, 00:12:08.532 { 00:12:08.532 "name": "BaseBdev3", 00:12:08.532 "uuid": "e366b2c9-fafc-4398-b0a9-a2cd22a518a4", 00:12:08.532 
"is_configured": true, 00:12:08.532 "data_offset": 0, 00:12:08.532 "data_size": 65536 00:12:08.532 }, 00:12:08.532 { 00:12:08.532 "name": "BaseBdev4", 00:12:08.532 "uuid": "334550ce-d69e-4d10-81aa-18264c8948cb", 00:12:08.532 "is_configured": true, 00:12:08.532 "data_offset": 0, 00:12:08.532 "data_size": 65536 00:12:08.532 } 00:12:08.532 ] 00:12:08.532 } 00:12:08.532 } 00:12:08.532 }' 00:12:08.532 15:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:08.532 BaseBdev2 00:12:08.532 BaseBdev3 00:12:08.532 BaseBdev4' 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.532 15:39:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.532 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.792 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.792 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.792 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.792 15:39:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:08.792 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.792 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.792 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.792 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.792 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.792 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.792 15:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:08.792 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.792 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.792 [2024-11-25 15:39:07.282376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:08.792 [2024-11-25 15:39:07.282403] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.792 [2024-11-25 15:39:07.282483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.792 [2024-11-25 15:39:07.282770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.792 [2024-11-25 15:39:07.282784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:08.793 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.793 15:39:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 72913 00:12:08.793 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72913 ']' 00:12:08.793 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72913 00:12:08.793 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:08.793 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.793 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72913 00:12:08.793 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:08.793 killing process with pid 72913 00:12:08.793 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:08.793 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72913' 00:12:08.793 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72913 00:12:08.793 [2024-11-25 15:39:07.321273] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:08.793 15:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72913 00:12:09.053 [2024-11-25 15:39:07.709632] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:10.429 00:12:10.429 real 0m11.495s 00:12:10.429 user 0m18.330s 00:12:10.429 sys 0m2.047s 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.429 ************************************ 00:12:10.429 END TEST raid_state_function_test 00:12:10.429 ************************************ 
00:12:10.429 15:39:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:10.429 15:39:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:10.429 15:39:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.429 15:39:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:10.429 ************************************ 00:12:10.429 START TEST raid_state_function_test_sb 00:12:10.429 ************************************ 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:10.429 
15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73581 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73581' 00:12:10.429 Process raid pid: 73581 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73581 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73581 ']' 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:10.429 15:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.429 [2024-11-25 15:39:08.978070] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:12:10.429 [2024-11-25 15:39:08.978276] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.688 [2024-11-25 15:39:09.151561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.688 [2024-11-25 15:39:09.261571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.946 [2024-11-25 15:39:09.465142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.946 [2024-11-25 15:39:09.465229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.205 15:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.205 15:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:11.205 15:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:11.205 15:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.205 15:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.205 [2024-11-25 15:39:09.843549] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:11.205 [2024-11-25 15:39:09.843603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:11.205 [2024-11-25 15:39:09.843614] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:11.205 [2024-11-25 15:39:09.843623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:11.205 [2024-11-25 15:39:09.843630] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:11.205 [2024-11-25 15:39:09.843639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:11.205 [2024-11-25 15:39:09.843649] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:11.205 [2024-11-25 15:39:09.843658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:11.205 15:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.205 15:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:11.206 15:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.206 15:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.206 15:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.206 15:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.206 15:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.206 15:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.206 15:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.206 15:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.206 15:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.206 15:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.206 15:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.206 15:39:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.206 15:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.206 15:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.464 15:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.464 "name": "Existed_Raid", 00:12:11.464 "uuid": "db9ded1c-b172-4788-8548-524dd1aed4a3", 00:12:11.464 "strip_size_kb": 0, 00:12:11.464 "state": "configuring", 00:12:11.464 "raid_level": "raid1", 00:12:11.464 "superblock": true, 00:12:11.464 "num_base_bdevs": 4, 00:12:11.464 "num_base_bdevs_discovered": 0, 00:12:11.464 "num_base_bdevs_operational": 4, 00:12:11.464 "base_bdevs_list": [ 00:12:11.464 { 00:12:11.464 "name": "BaseBdev1", 00:12:11.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.464 "is_configured": false, 00:12:11.464 "data_offset": 0, 00:12:11.464 "data_size": 0 00:12:11.464 }, 00:12:11.464 { 00:12:11.464 "name": "BaseBdev2", 00:12:11.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.464 "is_configured": false, 00:12:11.464 "data_offset": 0, 00:12:11.464 "data_size": 0 00:12:11.464 }, 00:12:11.464 { 00:12:11.464 "name": "BaseBdev3", 00:12:11.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.464 "is_configured": false, 00:12:11.464 "data_offset": 0, 00:12:11.464 "data_size": 0 00:12:11.464 }, 00:12:11.464 { 00:12:11.464 "name": "BaseBdev4", 00:12:11.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.464 "is_configured": false, 00:12:11.464 "data_offset": 0, 00:12:11.464 "data_size": 0 00:12:11.465 } 00:12:11.465 ] 00:12:11.465 }' 00:12:11.465 15:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.465 15:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.724 [2024-11-25 15:39:10.290773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:11.724 [2024-11-25 15:39:10.290878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.724 [2024-11-25 15:39:10.302734] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:11.724 [2024-11-25 15:39:10.302817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:11.724 [2024-11-25 15:39:10.302846] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:11.724 [2024-11-25 15:39:10.302869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:11.724 [2024-11-25 15:39:10.302887] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:11.724 [2024-11-25 15:39:10.302909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:11.724 [2024-11-25 15:39:10.302927] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:11.724 [2024-11-25 15:39:10.302948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.724 [2024-11-25 15:39:10.350929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.724 BaseBdev1 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.724 [ 00:12:11.724 { 00:12:11.724 "name": "BaseBdev1", 00:12:11.724 "aliases": [ 00:12:11.724 "63994b64-f868-4378-8da0-5dbd1168ca67" 00:12:11.724 ], 00:12:11.724 "product_name": "Malloc disk", 00:12:11.724 "block_size": 512, 00:12:11.724 "num_blocks": 65536, 00:12:11.724 "uuid": "63994b64-f868-4378-8da0-5dbd1168ca67", 00:12:11.724 "assigned_rate_limits": { 00:12:11.724 "rw_ios_per_sec": 0, 00:12:11.724 "rw_mbytes_per_sec": 0, 00:12:11.724 "r_mbytes_per_sec": 0, 00:12:11.724 "w_mbytes_per_sec": 0 00:12:11.724 }, 00:12:11.724 "claimed": true, 00:12:11.724 "claim_type": "exclusive_write", 00:12:11.724 "zoned": false, 00:12:11.724 "supported_io_types": { 00:12:11.724 "read": true, 00:12:11.724 "write": true, 00:12:11.724 "unmap": true, 00:12:11.724 "flush": true, 00:12:11.724 "reset": true, 00:12:11.724 "nvme_admin": false, 00:12:11.724 "nvme_io": false, 00:12:11.724 "nvme_io_md": false, 00:12:11.724 "write_zeroes": true, 00:12:11.724 "zcopy": true, 00:12:11.724 "get_zone_info": false, 00:12:11.724 "zone_management": false, 00:12:11.724 "zone_append": false, 00:12:11.724 "compare": false, 00:12:11.724 "compare_and_write": false, 00:12:11.724 "abort": true, 00:12:11.724 "seek_hole": false, 00:12:11.724 "seek_data": false, 00:12:11.724 "copy": true, 00:12:11.724 "nvme_iov_md": false 00:12:11.724 }, 00:12:11.724 "memory_domains": [ 00:12:11.724 { 00:12:11.724 "dma_device_id": "system", 00:12:11.724 "dma_device_type": 1 00:12:11.724 }, 00:12:11.724 { 00:12:11.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.724 "dma_device_type": 2 00:12:11.724 } 00:12:11.724 ], 00:12:11.724 "driver_specific": {} 
00:12:11.724 } 00:12:11.724 ] 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.724 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.983 15:39:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.983 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.983 "name": "Existed_Raid", 00:12:11.983 "uuid": "008166e8-0237-4886-8395-2c7035b52b98", 00:12:11.983 "strip_size_kb": 0, 00:12:11.983 "state": "configuring", 00:12:11.983 "raid_level": "raid1", 00:12:11.983 "superblock": true, 00:12:11.983 "num_base_bdevs": 4, 00:12:11.983 "num_base_bdevs_discovered": 1, 00:12:11.983 "num_base_bdevs_operational": 4, 00:12:11.983 "base_bdevs_list": [ 00:12:11.983 { 00:12:11.983 "name": "BaseBdev1", 00:12:11.983 "uuid": "63994b64-f868-4378-8da0-5dbd1168ca67", 00:12:11.983 "is_configured": true, 00:12:11.983 "data_offset": 2048, 00:12:11.983 "data_size": 63488 00:12:11.983 }, 00:12:11.983 { 00:12:11.983 "name": "BaseBdev2", 00:12:11.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.983 "is_configured": false, 00:12:11.983 "data_offset": 0, 00:12:11.983 "data_size": 0 00:12:11.983 }, 00:12:11.983 { 00:12:11.983 "name": "BaseBdev3", 00:12:11.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.983 "is_configured": false, 00:12:11.983 "data_offset": 0, 00:12:11.983 "data_size": 0 00:12:11.983 }, 00:12:11.983 { 00:12:11.983 "name": "BaseBdev4", 00:12:11.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.983 "is_configured": false, 00:12:11.983 "data_offset": 0, 00:12:11.983 "data_size": 0 00:12:11.983 } 00:12:11.983 ] 00:12:11.983 }' 00:12:11.983 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.983 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:12.242 [2024-11-25 15:39:10.814181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.242 [2024-11-25 15:39:10.814236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.242 [2024-11-25 15:39:10.826217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.242 [2024-11-25 15:39:10.827993] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.242 [2024-11-25 15:39:10.828042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.242 [2024-11-25 15:39:10.828053] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:12.242 [2024-11-25 15:39:10.828065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.242 [2024-11-25 15:39:10.828071] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:12.242 [2024-11-25 15:39:10.828080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:12.242 15:39:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.242 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.242 "name": 
"Existed_Raid", 00:12:12.242 "uuid": "15733132-fc4f-4cb2-9cc6-3c2476410b63", 00:12:12.242 "strip_size_kb": 0, 00:12:12.242 "state": "configuring", 00:12:12.242 "raid_level": "raid1", 00:12:12.242 "superblock": true, 00:12:12.242 "num_base_bdevs": 4, 00:12:12.242 "num_base_bdevs_discovered": 1, 00:12:12.242 "num_base_bdevs_operational": 4, 00:12:12.242 "base_bdevs_list": [ 00:12:12.242 { 00:12:12.242 "name": "BaseBdev1", 00:12:12.242 "uuid": "63994b64-f868-4378-8da0-5dbd1168ca67", 00:12:12.242 "is_configured": true, 00:12:12.242 "data_offset": 2048, 00:12:12.242 "data_size": 63488 00:12:12.242 }, 00:12:12.242 { 00:12:12.242 "name": "BaseBdev2", 00:12:12.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.243 "is_configured": false, 00:12:12.243 "data_offset": 0, 00:12:12.243 "data_size": 0 00:12:12.243 }, 00:12:12.243 { 00:12:12.243 "name": "BaseBdev3", 00:12:12.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.243 "is_configured": false, 00:12:12.243 "data_offset": 0, 00:12:12.243 "data_size": 0 00:12:12.243 }, 00:12:12.243 { 00:12:12.243 "name": "BaseBdev4", 00:12:12.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.243 "is_configured": false, 00:12:12.243 "data_offset": 0, 00:12:12.243 "data_size": 0 00:12:12.243 } 00:12:12.243 ] 00:12:12.243 }' 00:12:12.243 15:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.243 15:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.810 [2024-11-25 15:39:11.329963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:12.810 
BaseBdev2 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.810 [ 00:12:12.810 { 00:12:12.810 "name": "BaseBdev2", 00:12:12.810 "aliases": [ 00:12:12.810 "7695c063-7e02-4b5d-b1a9-f27f0f44da34" 00:12:12.810 ], 00:12:12.810 "product_name": "Malloc disk", 00:12:12.810 "block_size": 512, 00:12:12.810 "num_blocks": 65536, 00:12:12.810 "uuid": "7695c063-7e02-4b5d-b1a9-f27f0f44da34", 00:12:12.810 "assigned_rate_limits": { 
00:12:12.810 "rw_ios_per_sec": 0, 00:12:12.810 "rw_mbytes_per_sec": 0, 00:12:12.810 "r_mbytes_per_sec": 0, 00:12:12.810 "w_mbytes_per_sec": 0 00:12:12.810 }, 00:12:12.810 "claimed": true, 00:12:12.810 "claim_type": "exclusive_write", 00:12:12.810 "zoned": false, 00:12:12.810 "supported_io_types": { 00:12:12.810 "read": true, 00:12:12.810 "write": true, 00:12:12.810 "unmap": true, 00:12:12.810 "flush": true, 00:12:12.810 "reset": true, 00:12:12.810 "nvme_admin": false, 00:12:12.810 "nvme_io": false, 00:12:12.810 "nvme_io_md": false, 00:12:12.810 "write_zeroes": true, 00:12:12.810 "zcopy": true, 00:12:12.810 "get_zone_info": false, 00:12:12.810 "zone_management": false, 00:12:12.810 "zone_append": false, 00:12:12.810 "compare": false, 00:12:12.810 "compare_and_write": false, 00:12:12.810 "abort": true, 00:12:12.810 "seek_hole": false, 00:12:12.810 "seek_data": false, 00:12:12.810 "copy": true, 00:12:12.810 "nvme_iov_md": false 00:12:12.810 }, 00:12:12.810 "memory_domains": [ 00:12:12.810 { 00:12:12.810 "dma_device_id": "system", 00:12:12.810 "dma_device_type": 1 00:12:12.810 }, 00:12:12.810 { 00:12:12.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.810 "dma_device_type": 2 00:12:12.810 } 00:12:12.810 ], 00:12:12.810 "driver_specific": {} 00:12:12.810 } 00:12:12.810 ] 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.810 "name": "Existed_Raid", 00:12:12.810 "uuid": "15733132-fc4f-4cb2-9cc6-3c2476410b63", 00:12:12.810 "strip_size_kb": 0, 00:12:12.810 "state": "configuring", 00:12:12.810 "raid_level": "raid1", 00:12:12.810 "superblock": true, 00:12:12.810 "num_base_bdevs": 4, 00:12:12.810 "num_base_bdevs_discovered": 2, 00:12:12.810 "num_base_bdevs_operational": 4, 00:12:12.810 
"base_bdevs_list": [ 00:12:12.810 { 00:12:12.810 "name": "BaseBdev1", 00:12:12.810 "uuid": "63994b64-f868-4378-8da0-5dbd1168ca67", 00:12:12.810 "is_configured": true, 00:12:12.810 "data_offset": 2048, 00:12:12.810 "data_size": 63488 00:12:12.810 }, 00:12:12.810 { 00:12:12.810 "name": "BaseBdev2", 00:12:12.810 "uuid": "7695c063-7e02-4b5d-b1a9-f27f0f44da34", 00:12:12.810 "is_configured": true, 00:12:12.810 "data_offset": 2048, 00:12:12.810 "data_size": 63488 00:12:12.810 }, 00:12:12.810 { 00:12:12.810 "name": "BaseBdev3", 00:12:12.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.810 "is_configured": false, 00:12:12.810 "data_offset": 0, 00:12:12.810 "data_size": 0 00:12:12.810 }, 00:12:12.810 { 00:12:12.810 "name": "BaseBdev4", 00:12:12.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.810 "is_configured": false, 00:12:12.810 "data_offset": 0, 00:12:12.810 "data_size": 0 00:12:12.810 } 00:12:12.810 ] 00:12:12.810 }' 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.810 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 [2024-11-25 15:39:11.872521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.379 BaseBdev3 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 [ 00:12:13.379 { 00:12:13.379 "name": "BaseBdev3", 00:12:13.379 "aliases": [ 00:12:13.379 "6e3a9837-1326-485f-89c5-3fe3f76f4e72" 00:12:13.379 ], 00:12:13.379 "product_name": "Malloc disk", 00:12:13.379 "block_size": 512, 00:12:13.379 "num_blocks": 65536, 00:12:13.379 "uuid": "6e3a9837-1326-485f-89c5-3fe3f76f4e72", 00:12:13.379 "assigned_rate_limits": { 00:12:13.379 "rw_ios_per_sec": 0, 00:12:13.379 "rw_mbytes_per_sec": 0, 00:12:13.379 "r_mbytes_per_sec": 0, 00:12:13.379 "w_mbytes_per_sec": 0 00:12:13.379 }, 00:12:13.379 "claimed": true, 00:12:13.379 "claim_type": "exclusive_write", 00:12:13.379 "zoned": false, 00:12:13.379 "supported_io_types": { 00:12:13.379 "read": true, 00:12:13.379 
"write": true, 00:12:13.379 "unmap": true, 00:12:13.379 "flush": true, 00:12:13.379 "reset": true, 00:12:13.379 "nvme_admin": false, 00:12:13.379 "nvme_io": false, 00:12:13.379 "nvme_io_md": false, 00:12:13.379 "write_zeroes": true, 00:12:13.379 "zcopy": true, 00:12:13.379 "get_zone_info": false, 00:12:13.379 "zone_management": false, 00:12:13.379 "zone_append": false, 00:12:13.379 "compare": false, 00:12:13.379 "compare_and_write": false, 00:12:13.379 "abort": true, 00:12:13.379 "seek_hole": false, 00:12:13.379 "seek_data": false, 00:12:13.379 "copy": true, 00:12:13.379 "nvme_iov_md": false 00:12:13.379 }, 00:12:13.379 "memory_domains": [ 00:12:13.379 { 00:12:13.379 "dma_device_id": "system", 00:12:13.379 "dma_device_type": 1 00:12:13.379 }, 00:12:13.379 { 00:12:13.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.379 "dma_device_type": 2 00:12:13.379 } 00:12:13.379 ], 00:12:13.379 "driver_specific": {} 00:12:13.379 } 00:12:13.379 ] 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.379 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.380 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.380 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.380 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.380 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.380 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.380 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.380 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.380 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.380 "name": "Existed_Raid", 00:12:13.380 "uuid": "15733132-fc4f-4cb2-9cc6-3c2476410b63", 00:12:13.380 "strip_size_kb": 0, 00:12:13.380 "state": "configuring", 00:12:13.380 "raid_level": "raid1", 00:12:13.380 "superblock": true, 00:12:13.380 "num_base_bdevs": 4, 00:12:13.380 "num_base_bdevs_discovered": 3, 00:12:13.380 "num_base_bdevs_operational": 4, 00:12:13.380 "base_bdevs_list": [ 00:12:13.380 { 00:12:13.380 "name": "BaseBdev1", 00:12:13.380 "uuid": "63994b64-f868-4378-8da0-5dbd1168ca67", 00:12:13.380 "is_configured": true, 00:12:13.380 "data_offset": 2048, 00:12:13.380 "data_size": 63488 00:12:13.380 }, 00:12:13.380 { 00:12:13.380 "name": "BaseBdev2", 00:12:13.380 "uuid": 
"7695c063-7e02-4b5d-b1a9-f27f0f44da34", 00:12:13.380 "is_configured": true, 00:12:13.380 "data_offset": 2048, 00:12:13.380 "data_size": 63488 00:12:13.380 }, 00:12:13.380 { 00:12:13.380 "name": "BaseBdev3", 00:12:13.380 "uuid": "6e3a9837-1326-485f-89c5-3fe3f76f4e72", 00:12:13.380 "is_configured": true, 00:12:13.380 "data_offset": 2048, 00:12:13.380 "data_size": 63488 00:12:13.380 }, 00:12:13.380 { 00:12:13.380 "name": "BaseBdev4", 00:12:13.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.380 "is_configured": false, 00:12:13.380 "data_offset": 0, 00:12:13.380 "data_size": 0 00:12:13.380 } 00:12:13.380 ] 00:12:13.380 }' 00:12:13.380 15:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.380 15:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.947 [2024-11-25 15:39:12.371298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:13.947 [2024-11-25 15:39:12.371637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:13.947 [2024-11-25 15:39:12.371656] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:13.947 [2024-11-25 15:39:12.372012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:13.947 BaseBdev4 00:12:13.947 [2024-11-25 15:39:12.372280] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:13.947 [2024-11-25 15:39:12.372307] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:13.947 [2024-11-25 15:39:12.372526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.947 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.947 [ 00:12:13.947 { 00:12:13.947 "name": "BaseBdev4", 00:12:13.947 "aliases": [ 00:12:13.947 "09e6a337-5650-46ca-b774-501b7d8548d7" 00:12:13.947 ], 00:12:13.947 "product_name": "Malloc disk", 00:12:13.947 "block_size": 512, 00:12:13.947 
"num_blocks": 65536, 00:12:13.947 "uuid": "09e6a337-5650-46ca-b774-501b7d8548d7", 00:12:13.947 "assigned_rate_limits": { 00:12:13.947 "rw_ios_per_sec": 0, 00:12:13.947 "rw_mbytes_per_sec": 0, 00:12:13.947 "r_mbytes_per_sec": 0, 00:12:13.947 "w_mbytes_per_sec": 0 00:12:13.947 }, 00:12:13.947 "claimed": true, 00:12:13.947 "claim_type": "exclusive_write", 00:12:13.947 "zoned": false, 00:12:13.947 "supported_io_types": { 00:12:13.947 "read": true, 00:12:13.947 "write": true, 00:12:13.947 "unmap": true, 00:12:13.947 "flush": true, 00:12:13.948 "reset": true, 00:12:13.948 "nvme_admin": false, 00:12:13.948 "nvme_io": false, 00:12:13.948 "nvme_io_md": false, 00:12:13.948 "write_zeroes": true, 00:12:13.948 "zcopy": true, 00:12:13.948 "get_zone_info": false, 00:12:13.948 "zone_management": false, 00:12:13.948 "zone_append": false, 00:12:13.948 "compare": false, 00:12:13.948 "compare_and_write": false, 00:12:13.948 "abort": true, 00:12:13.948 "seek_hole": false, 00:12:13.948 "seek_data": false, 00:12:13.948 "copy": true, 00:12:13.948 "nvme_iov_md": false 00:12:13.948 }, 00:12:13.948 "memory_domains": [ 00:12:13.948 { 00:12:13.948 "dma_device_id": "system", 00:12:13.948 "dma_device_type": 1 00:12:13.948 }, 00:12:13.948 { 00:12:13.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.948 "dma_device_type": 2 00:12:13.948 } 00:12:13.948 ], 00:12:13.948 "driver_specific": {} 00:12:13.948 } 00:12:13.948 ] 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.948 "name": "Existed_Raid", 00:12:13.948 "uuid": "15733132-fc4f-4cb2-9cc6-3c2476410b63", 00:12:13.948 "strip_size_kb": 0, 00:12:13.948 "state": "online", 00:12:13.948 "raid_level": "raid1", 00:12:13.948 "superblock": true, 00:12:13.948 "num_base_bdevs": 4, 
00:12:13.948 "num_base_bdevs_discovered": 4, 00:12:13.948 "num_base_bdevs_operational": 4, 00:12:13.948 "base_bdevs_list": [ 00:12:13.948 { 00:12:13.948 "name": "BaseBdev1", 00:12:13.948 "uuid": "63994b64-f868-4378-8da0-5dbd1168ca67", 00:12:13.948 "is_configured": true, 00:12:13.948 "data_offset": 2048, 00:12:13.948 "data_size": 63488 00:12:13.948 }, 00:12:13.948 { 00:12:13.948 "name": "BaseBdev2", 00:12:13.948 "uuid": "7695c063-7e02-4b5d-b1a9-f27f0f44da34", 00:12:13.948 "is_configured": true, 00:12:13.948 "data_offset": 2048, 00:12:13.948 "data_size": 63488 00:12:13.948 }, 00:12:13.948 { 00:12:13.948 "name": "BaseBdev3", 00:12:13.948 "uuid": "6e3a9837-1326-485f-89c5-3fe3f76f4e72", 00:12:13.948 "is_configured": true, 00:12:13.948 "data_offset": 2048, 00:12:13.948 "data_size": 63488 00:12:13.948 }, 00:12:13.948 { 00:12:13.948 "name": "BaseBdev4", 00:12:13.948 "uuid": "09e6a337-5650-46ca-b774-501b7d8548d7", 00:12:13.948 "is_configured": true, 00:12:13.948 "data_offset": 2048, 00:12:13.948 "data_size": 63488 00:12:13.948 } 00:12:13.948 ] 00:12:13.948 }' 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.948 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.207 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:14.207 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:14.207 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:14.207 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:14.207 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:14.207 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:14.207 
15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:14.207 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:14.207 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.207 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.207 [2024-11-25 15:39:12.835091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.207 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.207 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:14.207 "name": "Existed_Raid", 00:12:14.207 "aliases": [ 00:12:14.207 "15733132-fc4f-4cb2-9cc6-3c2476410b63" 00:12:14.207 ], 00:12:14.207 "product_name": "Raid Volume", 00:12:14.207 "block_size": 512, 00:12:14.207 "num_blocks": 63488, 00:12:14.207 "uuid": "15733132-fc4f-4cb2-9cc6-3c2476410b63", 00:12:14.207 "assigned_rate_limits": { 00:12:14.207 "rw_ios_per_sec": 0, 00:12:14.207 "rw_mbytes_per_sec": 0, 00:12:14.207 "r_mbytes_per_sec": 0, 00:12:14.207 "w_mbytes_per_sec": 0 00:12:14.207 }, 00:12:14.207 "claimed": false, 00:12:14.207 "zoned": false, 00:12:14.207 "supported_io_types": { 00:12:14.207 "read": true, 00:12:14.207 "write": true, 00:12:14.207 "unmap": false, 00:12:14.207 "flush": false, 00:12:14.207 "reset": true, 00:12:14.207 "nvme_admin": false, 00:12:14.207 "nvme_io": false, 00:12:14.207 "nvme_io_md": false, 00:12:14.207 "write_zeroes": true, 00:12:14.207 "zcopy": false, 00:12:14.207 "get_zone_info": false, 00:12:14.207 "zone_management": false, 00:12:14.207 "zone_append": false, 00:12:14.207 "compare": false, 00:12:14.207 "compare_and_write": false, 00:12:14.207 "abort": false, 00:12:14.207 "seek_hole": false, 00:12:14.207 "seek_data": false, 00:12:14.207 "copy": false, 00:12:14.207 
"nvme_iov_md": false 00:12:14.207 }, 00:12:14.207 "memory_domains": [ 00:12:14.207 { 00:12:14.207 "dma_device_id": "system", 00:12:14.207 "dma_device_type": 1 00:12:14.207 }, 00:12:14.207 { 00:12:14.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.207 "dma_device_type": 2 00:12:14.207 }, 00:12:14.207 { 00:12:14.207 "dma_device_id": "system", 00:12:14.207 "dma_device_type": 1 00:12:14.207 }, 00:12:14.207 { 00:12:14.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.207 "dma_device_type": 2 00:12:14.207 }, 00:12:14.207 { 00:12:14.207 "dma_device_id": "system", 00:12:14.207 "dma_device_type": 1 00:12:14.207 }, 00:12:14.207 { 00:12:14.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.207 "dma_device_type": 2 00:12:14.207 }, 00:12:14.207 { 00:12:14.207 "dma_device_id": "system", 00:12:14.207 "dma_device_type": 1 00:12:14.207 }, 00:12:14.207 { 00:12:14.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.208 "dma_device_type": 2 00:12:14.208 } 00:12:14.208 ], 00:12:14.208 "driver_specific": { 00:12:14.208 "raid": { 00:12:14.208 "uuid": "15733132-fc4f-4cb2-9cc6-3c2476410b63", 00:12:14.208 "strip_size_kb": 0, 00:12:14.208 "state": "online", 00:12:14.208 "raid_level": "raid1", 00:12:14.208 "superblock": true, 00:12:14.208 "num_base_bdevs": 4, 00:12:14.208 "num_base_bdevs_discovered": 4, 00:12:14.208 "num_base_bdevs_operational": 4, 00:12:14.208 "base_bdevs_list": [ 00:12:14.208 { 00:12:14.208 "name": "BaseBdev1", 00:12:14.208 "uuid": "63994b64-f868-4378-8da0-5dbd1168ca67", 00:12:14.208 "is_configured": true, 00:12:14.208 "data_offset": 2048, 00:12:14.208 "data_size": 63488 00:12:14.208 }, 00:12:14.208 { 00:12:14.208 "name": "BaseBdev2", 00:12:14.208 "uuid": "7695c063-7e02-4b5d-b1a9-f27f0f44da34", 00:12:14.208 "is_configured": true, 00:12:14.208 "data_offset": 2048, 00:12:14.208 "data_size": 63488 00:12:14.208 }, 00:12:14.208 { 00:12:14.208 "name": "BaseBdev3", 00:12:14.208 "uuid": "6e3a9837-1326-485f-89c5-3fe3f76f4e72", 00:12:14.208 "is_configured": true, 
00:12:14.208 "data_offset": 2048, 00:12:14.208 "data_size": 63488 00:12:14.208 }, 00:12:14.208 { 00:12:14.208 "name": "BaseBdev4", 00:12:14.208 "uuid": "09e6a337-5650-46ca-b774-501b7d8548d7", 00:12:14.208 "is_configured": true, 00:12:14.208 "data_offset": 2048, 00:12:14.208 "data_size": 63488 00:12:14.208 } 00:12:14.208 ] 00:12:14.208 } 00:12:14.208 } 00:12:14.208 }' 00:12:14.208 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:14.466 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:14.466 BaseBdev2 00:12:14.466 BaseBdev3 00:12:14.466 BaseBdev4' 00:12:14.466 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.466 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:14.466 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.466 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:14.466 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.466 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.466 15:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.466 15:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.466 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.466 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.466 15:39:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.466 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.466 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:14.466 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.466 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.466 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.466 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.466 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.467 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.726 [2024-11-25 15:39:13.146245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:14.726 15:39:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.726 "name": "Existed_Raid", 00:12:14.726 "uuid": "15733132-fc4f-4cb2-9cc6-3c2476410b63", 00:12:14.726 "strip_size_kb": 0, 00:12:14.726 
"state": "online", 00:12:14.726 "raid_level": "raid1", 00:12:14.726 "superblock": true, 00:12:14.726 "num_base_bdevs": 4, 00:12:14.726 "num_base_bdevs_discovered": 3, 00:12:14.726 "num_base_bdevs_operational": 3, 00:12:14.726 "base_bdevs_list": [ 00:12:14.726 { 00:12:14.726 "name": null, 00:12:14.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.726 "is_configured": false, 00:12:14.726 "data_offset": 0, 00:12:14.726 "data_size": 63488 00:12:14.726 }, 00:12:14.726 { 00:12:14.726 "name": "BaseBdev2", 00:12:14.726 "uuid": "7695c063-7e02-4b5d-b1a9-f27f0f44da34", 00:12:14.726 "is_configured": true, 00:12:14.726 "data_offset": 2048, 00:12:14.726 "data_size": 63488 00:12:14.726 }, 00:12:14.726 { 00:12:14.726 "name": "BaseBdev3", 00:12:14.726 "uuid": "6e3a9837-1326-485f-89c5-3fe3f76f4e72", 00:12:14.726 "is_configured": true, 00:12:14.726 "data_offset": 2048, 00:12:14.726 "data_size": 63488 00:12:14.726 }, 00:12:14.726 { 00:12:14.726 "name": "BaseBdev4", 00:12:14.726 "uuid": "09e6a337-5650-46ca-b774-501b7d8548d7", 00:12:14.726 "is_configured": true, 00:12:14.726 "data_offset": 2048, 00:12:14.726 "data_size": 63488 00:12:14.726 } 00:12:14.726 ] 00:12:14.726 }' 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.726 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:15.295 15:39:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.295 [2024-11-25 15:39:13.761171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.295 15:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.295 [2024-11-25 15:39:13.927312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.554 [2024-11-25 15:39:14.076926] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:15.554 [2024-11-25 15:39:14.077054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.554 [2024-11-25 15:39:14.167848] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.554 [2024-11-25 15:39:14.167915] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.554 [2024-11-25 15:39:14.167927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.554 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.555 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.555 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.555 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:15.555 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.555 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:15.555 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:15.555 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:15.555 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:15.555 15:39:14 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:15.555 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:15.555 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.555 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.814 BaseBdev2 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:15.814 [ 00:12:15.814 { 00:12:15.814 "name": "BaseBdev2", 00:12:15.814 "aliases": [ 00:12:15.814 "8582cc28-708a-4594-b61d-ffb648c3d4d2" 00:12:15.814 ], 00:12:15.814 "product_name": "Malloc disk", 00:12:15.814 "block_size": 512, 00:12:15.814 "num_blocks": 65536, 00:12:15.814 "uuid": "8582cc28-708a-4594-b61d-ffb648c3d4d2", 00:12:15.814 "assigned_rate_limits": { 00:12:15.814 "rw_ios_per_sec": 0, 00:12:15.814 "rw_mbytes_per_sec": 0, 00:12:15.814 "r_mbytes_per_sec": 0, 00:12:15.814 "w_mbytes_per_sec": 0 00:12:15.814 }, 00:12:15.814 "claimed": false, 00:12:15.814 "zoned": false, 00:12:15.814 "supported_io_types": { 00:12:15.814 "read": true, 00:12:15.814 "write": true, 00:12:15.814 "unmap": true, 00:12:15.814 "flush": true, 00:12:15.814 "reset": true, 00:12:15.814 "nvme_admin": false, 00:12:15.814 "nvme_io": false, 00:12:15.814 "nvme_io_md": false, 00:12:15.814 "write_zeroes": true, 00:12:15.814 "zcopy": true, 00:12:15.814 "get_zone_info": false, 00:12:15.814 "zone_management": false, 00:12:15.814 "zone_append": false, 00:12:15.814 "compare": false, 00:12:15.814 "compare_and_write": false, 00:12:15.814 "abort": true, 00:12:15.814 "seek_hole": false, 00:12:15.814 "seek_data": false, 00:12:15.814 "copy": true, 00:12:15.814 "nvme_iov_md": false 00:12:15.814 }, 00:12:15.814 "memory_domains": [ 00:12:15.814 { 00:12:15.814 "dma_device_id": "system", 00:12:15.814 "dma_device_type": 1 00:12:15.814 }, 00:12:15.814 { 00:12:15.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.814 "dma_device_type": 2 00:12:15.814 } 00:12:15.814 ], 00:12:15.814 "driver_specific": {} 00:12:15.814 } 00:12:15.814 ] 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:15.814 15:39:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.814 BaseBdev3 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.814 15:39:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.814 [ 00:12:15.814 { 00:12:15.814 "name": "BaseBdev3", 00:12:15.814 "aliases": [ 00:12:15.814 "64281776-661f-4185-8378-16d98abd7b0f" 00:12:15.814 ], 00:12:15.814 "product_name": "Malloc disk", 00:12:15.814 "block_size": 512, 00:12:15.814 "num_blocks": 65536, 00:12:15.814 "uuid": "64281776-661f-4185-8378-16d98abd7b0f", 00:12:15.814 "assigned_rate_limits": { 00:12:15.814 "rw_ios_per_sec": 0, 00:12:15.814 "rw_mbytes_per_sec": 0, 00:12:15.814 "r_mbytes_per_sec": 0, 00:12:15.814 "w_mbytes_per_sec": 0 00:12:15.814 }, 00:12:15.814 "claimed": false, 00:12:15.814 "zoned": false, 00:12:15.814 "supported_io_types": { 00:12:15.814 "read": true, 00:12:15.814 "write": true, 00:12:15.814 "unmap": true, 00:12:15.814 "flush": true, 00:12:15.814 "reset": true, 00:12:15.814 "nvme_admin": false, 00:12:15.814 "nvme_io": false, 00:12:15.814 "nvme_io_md": false, 00:12:15.814 "write_zeroes": true, 00:12:15.814 "zcopy": true, 00:12:15.814 "get_zone_info": false, 00:12:15.814 "zone_management": false, 00:12:15.814 "zone_append": false, 00:12:15.814 "compare": false, 00:12:15.814 "compare_and_write": false, 00:12:15.814 "abort": true, 00:12:15.814 "seek_hole": false, 00:12:15.814 "seek_data": false, 00:12:15.814 "copy": true, 00:12:15.814 "nvme_iov_md": false 00:12:15.814 }, 00:12:15.814 "memory_domains": [ 00:12:15.814 { 00:12:15.814 "dma_device_id": "system", 00:12:15.814 "dma_device_type": 1 00:12:15.814 }, 00:12:15.814 { 00:12:15.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.814 "dma_device_type": 2 00:12:15.814 } 00:12:15.814 ], 00:12:15.814 "driver_specific": {} 00:12:15.814 } 00:12:15.814 ] 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:15.814 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.815 BaseBdev4 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.815 [ 00:12:15.815 { 00:12:15.815 "name": "BaseBdev4", 00:12:15.815 "aliases": [ 00:12:15.815 "488a900b-fe31-458a-b152-6b97b714c6c5" 00:12:15.815 ], 00:12:15.815 "product_name": "Malloc disk", 00:12:15.815 "block_size": 512, 00:12:15.815 "num_blocks": 65536, 00:12:15.815 "uuid": "488a900b-fe31-458a-b152-6b97b714c6c5", 00:12:15.815 "assigned_rate_limits": { 00:12:15.815 "rw_ios_per_sec": 0, 00:12:15.815 "rw_mbytes_per_sec": 0, 00:12:15.815 "r_mbytes_per_sec": 0, 00:12:15.815 "w_mbytes_per_sec": 0 00:12:15.815 }, 00:12:15.815 "claimed": false, 00:12:15.815 "zoned": false, 00:12:15.815 "supported_io_types": { 00:12:15.815 "read": true, 00:12:15.815 "write": true, 00:12:15.815 "unmap": true, 00:12:15.815 "flush": true, 00:12:15.815 "reset": true, 00:12:15.815 "nvme_admin": false, 00:12:15.815 "nvme_io": false, 00:12:15.815 "nvme_io_md": false, 00:12:15.815 "write_zeroes": true, 00:12:15.815 "zcopy": true, 00:12:15.815 "get_zone_info": false, 00:12:15.815 "zone_management": false, 00:12:15.815 "zone_append": false, 00:12:15.815 "compare": false, 00:12:15.815 "compare_and_write": false, 00:12:15.815 "abort": true, 00:12:15.815 "seek_hole": false, 00:12:15.815 "seek_data": false, 00:12:15.815 "copy": true, 00:12:15.815 "nvme_iov_md": false 00:12:15.815 }, 00:12:15.815 "memory_domains": [ 00:12:15.815 { 00:12:15.815 "dma_device_id": "system", 00:12:15.815 "dma_device_type": 1 00:12:15.815 }, 00:12:15.815 { 00:12:15.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.815 "dma_device_type": 2 00:12:15.815 } 00:12:15.815 ], 00:12:15.815 "driver_specific": {} 00:12:15.815 } 00:12:15.815 ] 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.815 [2024-11-25 15:39:14.464132] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:15.815 [2024-11-25 15:39:14.464194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:15.815 [2024-11-25 15:39:14.464219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.815 [2024-11-25 15:39:14.466061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:15.815 [2024-11-25 15:39:14.466109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.815 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.074 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.074 "name": "Existed_Raid", 00:12:16.074 "uuid": "e68eafd2-b416-41ff-8ab1-281e4c63dcaf", 00:12:16.074 "strip_size_kb": 0, 00:12:16.074 "state": "configuring", 00:12:16.074 "raid_level": "raid1", 00:12:16.074 "superblock": true, 00:12:16.074 "num_base_bdevs": 4, 00:12:16.074 "num_base_bdevs_discovered": 3, 00:12:16.074 "num_base_bdevs_operational": 4, 00:12:16.074 "base_bdevs_list": [ 00:12:16.074 { 00:12:16.074 "name": "BaseBdev1", 00:12:16.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.074 "is_configured": false, 00:12:16.074 "data_offset": 0, 00:12:16.074 "data_size": 0 00:12:16.074 }, 00:12:16.074 { 00:12:16.074 "name": "BaseBdev2", 00:12:16.074 "uuid": "8582cc28-708a-4594-b61d-ffb648c3d4d2", 
00:12:16.074 "is_configured": true, 00:12:16.074 "data_offset": 2048, 00:12:16.074 "data_size": 63488 00:12:16.074 }, 00:12:16.074 { 00:12:16.074 "name": "BaseBdev3", 00:12:16.074 "uuid": "64281776-661f-4185-8378-16d98abd7b0f", 00:12:16.074 "is_configured": true, 00:12:16.074 "data_offset": 2048, 00:12:16.074 "data_size": 63488 00:12:16.074 }, 00:12:16.074 { 00:12:16.074 "name": "BaseBdev4", 00:12:16.074 "uuid": "488a900b-fe31-458a-b152-6b97b714c6c5", 00:12:16.074 "is_configured": true, 00:12:16.074 "data_offset": 2048, 00:12:16.074 "data_size": 63488 00:12:16.074 } 00:12:16.074 ] 00:12:16.074 }' 00:12:16.074 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.074 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.333 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:16.333 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.333 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.333 [2024-11-25 15:39:14.919326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:16.333 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.333 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:16.333 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.333 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.333 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.334 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:16.334 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.334 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.334 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.334 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.334 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.334 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.334 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.334 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.334 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.334 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.334 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.334 "name": "Existed_Raid", 00:12:16.334 "uuid": "e68eafd2-b416-41ff-8ab1-281e4c63dcaf", 00:12:16.334 "strip_size_kb": 0, 00:12:16.334 "state": "configuring", 00:12:16.334 "raid_level": "raid1", 00:12:16.334 "superblock": true, 00:12:16.334 "num_base_bdevs": 4, 00:12:16.334 "num_base_bdevs_discovered": 2, 00:12:16.334 "num_base_bdevs_operational": 4, 00:12:16.334 "base_bdevs_list": [ 00:12:16.334 { 00:12:16.334 "name": "BaseBdev1", 00:12:16.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.334 "is_configured": false, 00:12:16.334 "data_offset": 0, 00:12:16.334 "data_size": 0 00:12:16.334 }, 00:12:16.334 { 00:12:16.334 "name": null, 00:12:16.334 "uuid": "8582cc28-708a-4594-b61d-ffb648c3d4d2", 00:12:16.334 
"is_configured": false, 00:12:16.334 "data_offset": 0, 00:12:16.334 "data_size": 63488 00:12:16.334 }, 00:12:16.334 { 00:12:16.334 "name": "BaseBdev3", 00:12:16.334 "uuid": "64281776-661f-4185-8378-16d98abd7b0f", 00:12:16.334 "is_configured": true, 00:12:16.334 "data_offset": 2048, 00:12:16.334 "data_size": 63488 00:12:16.334 }, 00:12:16.334 { 00:12:16.334 "name": "BaseBdev4", 00:12:16.334 "uuid": "488a900b-fe31-458a-b152-6b97b714c6c5", 00:12:16.334 "is_configured": true, 00:12:16.334 "data_offset": 2048, 00:12:16.334 "data_size": 63488 00:12:16.334 } 00:12:16.334 ] 00:12:16.334 }' 00:12:16.334 15:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.334 15:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.908 [2024-11-25 15:39:15.417679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.908 BaseBdev1 
00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.908 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.908 [ 00:12:16.908 { 00:12:16.908 "name": "BaseBdev1", 00:12:16.908 "aliases": [ 00:12:16.908 "81ec16a1-0d03-4106-a6ad-ca863792d616" 00:12:16.908 ], 00:12:16.908 "product_name": "Malloc disk", 00:12:16.908 "block_size": 512, 00:12:16.908 "num_blocks": 65536, 00:12:16.908 "uuid": "81ec16a1-0d03-4106-a6ad-ca863792d616", 00:12:16.908 "assigned_rate_limits": { 00:12:16.908 
"rw_ios_per_sec": 0, 00:12:16.908 "rw_mbytes_per_sec": 0, 00:12:16.908 "r_mbytes_per_sec": 0, 00:12:16.908 "w_mbytes_per_sec": 0 00:12:16.908 }, 00:12:16.908 "claimed": true, 00:12:16.908 "claim_type": "exclusive_write", 00:12:16.908 "zoned": false, 00:12:16.908 "supported_io_types": { 00:12:16.908 "read": true, 00:12:16.908 "write": true, 00:12:16.908 "unmap": true, 00:12:16.908 "flush": true, 00:12:16.908 "reset": true, 00:12:16.908 "nvme_admin": false, 00:12:16.908 "nvme_io": false, 00:12:16.908 "nvme_io_md": false, 00:12:16.908 "write_zeroes": true, 00:12:16.909 "zcopy": true, 00:12:16.909 "get_zone_info": false, 00:12:16.909 "zone_management": false, 00:12:16.909 "zone_append": false, 00:12:16.909 "compare": false, 00:12:16.909 "compare_and_write": false, 00:12:16.909 "abort": true, 00:12:16.909 "seek_hole": false, 00:12:16.909 "seek_data": false, 00:12:16.909 "copy": true, 00:12:16.909 "nvme_iov_md": false 00:12:16.909 }, 00:12:16.909 "memory_domains": [ 00:12:16.909 { 00:12:16.909 "dma_device_id": "system", 00:12:16.909 "dma_device_type": 1 00:12:16.909 }, 00:12:16.909 { 00:12:16.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.909 "dma_device_type": 2 00:12:16.909 } 00:12:16.909 ], 00:12:16.909 "driver_specific": {} 00:12:16.909 } 00:12:16.909 ] 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.909 "name": "Existed_Raid", 00:12:16.909 "uuid": "e68eafd2-b416-41ff-8ab1-281e4c63dcaf", 00:12:16.909 "strip_size_kb": 0, 00:12:16.909 "state": "configuring", 00:12:16.909 "raid_level": "raid1", 00:12:16.909 "superblock": true, 00:12:16.909 "num_base_bdevs": 4, 00:12:16.909 "num_base_bdevs_discovered": 3, 00:12:16.909 "num_base_bdevs_operational": 4, 00:12:16.909 "base_bdevs_list": [ 00:12:16.909 { 00:12:16.909 "name": "BaseBdev1", 00:12:16.909 "uuid": "81ec16a1-0d03-4106-a6ad-ca863792d616", 00:12:16.909 "is_configured": true, 00:12:16.909 "data_offset": 2048, 00:12:16.909 "data_size": 63488 
00:12:16.909 }, 00:12:16.909 { 00:12:16.909 "name": null, 00:12:16.909 "uuid": "8582cc28-708a-4594-b61d-ffb648c3d4d2", 00:12:16.909 "is_configured": false, 00:12:16.909 "data_offset": 0, 00:12:16.909 "data_size": 63488 00:12:16.909 }, 00:12:16.909 { 00:12:16.909 "name": "BaseBdev3", 00:12:16.909 "uuid": "64281776-661f-4185-8378-16d98abd7b0f", 00:12:16.909 "is_configured": true, 00:12:16.909 "data_offset": 2048, 00:12:16.909 "data_size": 63488 00:12:16.909 }, 00:12:16.909 { 00:12:16.909 "name": "BaseBdev4", 00:12:16.909 "uuid": "488a900b-fe31-458a-b152-6b97b714c6c5", 00:12:16.909 "is_configured": true, 00:12:16.909 "data_offset": 2048, 00:12:16.909 "data_size": 63488 00:12:16.909 } 00:12:16.909 ] 00:12:16.909 }' 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.909 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.182 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.182 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.182 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.182 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.441 
[2024-11-25 15:39:15.908915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.441 15:39:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.441 "name": "Existed_Raid", 00:12:17.441 "uuid": "e68eafd2-b416-41ff-8ab1-281e4c63dcaf", 00:12:17.441 "strip_size_kb": 0, 00:12:17.441 "state": "configuring", 00:12:17.441 "raid_level": "raid1", 00:12:17.441 "superblock": true, 00:12:17.441 "num_base_bdevs": 4, 00:12:17.441 "num_base_bdevs_discovered": 2, 00:12:17.441 "num_base_bdevs_operational": 4, 00:12:17.441 "base_bdevs_list": [ 00:12:17.441 { 00:12:17.441 "name": "BaseBdev1", 00:12:17.441 "uuid": "81ec16a1-0d03-4106-a6ad-ca863792d616", 00:12:17.441 "is_configured": true, 00:12:17.441 "data_offset": 2048, 00:12:17.441 "data_size": 63488 00:12:17.441 }, 00:12:17.441 { 00:12:17.441 "name": null, 00:12:17.441 "uuid": "8582cc28-708a-4594-b61d-ffb648c3d4d2", 00:12:17.441 "is_configured": false, 00:12:17.441 "data_offset": 0, 00:12:17.441 "data_size": 63488 00:12:17.441 }, 00:12:17.441 { 00:12:17.441 "name": null, 00:12:17.441 "uuid": "64281776-661f-4185-8378-16d98abd7b0f", 00:12:17.441 "is_configured": false, 00:12:17.441 "data_offset": 0, 00:12:17.441 "data_size": 63488 00:12:17.441 }, 00:12:17.441 { 00:12:17.441 "name": "BaseBdev4", 00:12:17.441 "uuid": "488a900b-fe31-458a-b152-6b97b714c6c5", 00:12:17.441 "is_configured": true, 00:12:17.441 "data_offset": 2048, 00:12:17.441 "data_size": 63488 00:12:17.441 } 00:12:17.441 ] 00:12:17.441 }' 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.441 15:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.700 
15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.700 [2024-11-25 15:39:16.372092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:17.700 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.959 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.959 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.959 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.959 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.959 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.959 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.959 "name": "Existed_Raid", 00:12:17.959 "uuid": "e68eafd2-b416-41ff-8ab1-281e4c63dcaf", 00:12:17.959 "strip_size_kb": 0, 00:12:17.959 "state": "configuring", 00:12:17.959 "raid_level": "raid1", 00:12:17.959 "superblock": true, 00:12:17.959 "num_base_bdevs": 4, 00:12:17.959 "num_base_bdevs_discovered": 3, 00:12:17.959 "num_base_bdevs_operational": 4, 00:12:17.959 "base_bdevs_list": [ 00:12:17.959 { 00:12:17.959 "name": "BaseBdev1", 00:12:17.959 "uuid": "81ec16a1-0d03-4106-a6ad-ca863792d616", 00:12:17.959 "is_configured": true, 00:12:17.959 "data_offset": 2048, 00:12:17.959 "data_size": 63488 00:12:17.959 }, 00:12:17.959 { 00:12:17.959 "name": null, 00:12:17.959 "uuid": "8582cc28-708a-4594-b61d-ffb648c3d4d2", 00:12:17.959 "is_configured": false, 00:12:17.959 "data_offset": 0, 00:12:17.959 "data_size": 63488 00:12:17.959 }, 00:12:17.959 { 00:12:17.959 "name": "BaseBdev3", 00:12:17.959 "uuid": "64281776-661f-4185-8378-16d98abd7b0f", 00:12:17.959 "is_configured": true, 00:12:17.959 "data_offset": 2048, 00:12:17.959 "data_size": 63488 00:12:17.959 }, 00:12:17.959 { 00:12:17.959 "name": "BaseBdev4", 00:12:17.959 "uuid": 
"488a900b-fe31-458a-b152-6b97b714c6c5", 00:12:17.959 "is_configured": true, 00:12:17.959 "data_offset": 2048, 00:12:17.959 "data_size": 63488 00:12:17.959 } 00:12:17.959 ] 00:12:17.959 }' 00:12:17.959 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.959 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.217 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.217 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:18.217 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.217 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.217 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.217 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:18.217 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:18.217 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.217 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.476 [2024-11-25 15:39:16.895218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:18.476 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.476 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.476 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.476 15:39:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.476 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.476 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.476 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.476 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.476 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.476 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.476 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.476 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.476 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.476 15:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.476 15:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.476 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.476 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.476 "name": "Existed_Raid", 00:12:18.476 "uuid": "e68eafd2-b416-41ff-8ab1-281e4c63dcaf", 00:12:18.476 "strip_size_kb": 0, 00:12:18.476 "state": "configuring", 00:12:18.476 "raid_level": "raid1", 00:12:18.476 "superblock": true, 00:12:18.476 "num_base_bdevs": 4, 00:12:18.476 "num_base_bdevs_discovered": 2, 00:12:18.476 "num_base_bdevs_operational": 4, 00:12:18.476 "base_bdevs_list": [ 00:12:18.476 { 00:12:18.476 "name": null, 00:12:18.476 
"uuid": "81ec16a1-0d03-4106-a6ad-ca863792d616", 00:12:18.476 "is_configured": false, 00:12:18.476 "data_offset": 0, 00:12:18.476 "data_size": 63488 00:12:18.476 }, 00:12:18.476 { 00:12:18.476 "name": null, 00:12:18.476 "uuid": "8582cc28-708a-4594-b61d-ffb648c3d4d2", 00:12:18.476 "is_configured": false, 00:12:18.476 "data_offset": 0, 00:12:18.476 "data_size": 63488 00:12:18.476 }, 00:12:18.476 { 00:12:18.476 "name": "BaseBdev3", 00:12:18.476 "uuid": "64281776-661f-4185-8378-16d98abd7b0f", 00:12:18.476 "is_configured": true, 00:12:18.476 "data_offset": 2048, 00:12:18.476 "data_size": 63488 00:12:18.476 }, 00:12:18.476 { 00:12:18.476 "name": "BaseBdev4", 00:12:18.476 "uuid": "488a900b-fe31-458a-b152-6b97b714c6c5", 00:12:18.476 "is_configured": true, 00:12:18.476 "data_offset": 2048, 00:12:18.476 "data_size": 63488 00:12:18.476 } 00:12:18.476 ] 00:12:18.476 }' 00:12:18.476 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.476 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.043 [2024-11-25 15:39:17.498147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.043 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.044 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.044 "name": "Existed_Raid", 00:12:19.044 "uuid": "e68eafd2-b416-41ff-8ab1-281e4c63dcaf", 00:12:19.044 "strip_size_kb": 0, 00:12:19.044 "state": "configuring", 00:12:19.044 "raid_level": "raid1", 00:12:19.044 "superblock": true, 00:12:19.044 "num_base_bdevs": 4, 00:12:19.044 "num_base_bdevs_discovered": 3, 00:12:19.044 "num_base_bdevs_operational": 4, 00:12:19.044 "base_bdevs_list": [ 00:12:19.044 { 00:12:19.044 "name": null, 00:12:19.044 "uuid": "81ec16a1-0d03-4106-a6ad-ca863792d616", 00:12:19.044 "is_configured": false, 00:12:19.044 "data_offset": 0, 00:12:19.044 "data_size": 63488 00:12:19.044 }, 00:12:19.044 { 00:12:19.044 "name": "BaseBdev2", 00:12:19.044 "uuid": "8582cc28-708a-4594-b61d-ffb648c3d4d2", 00:12:19.044 "is_configured": true, 00:12:19.044 "data_offset": 2048, 00:12:19.044 "data_size": 63488 00:12:19.044 }, 00:12:19.044 { 00:12:19.044 "name": "BaseBdev3", 00:12:19.044 "uuid": "64281776-661f-4185-8378-16d98abd7b0f", 00:12:19.044 "is_configured": true, 00:12:19.044 "data_offset": 2048, 00:12:19.044 "data_size": 63488 00:12:19.044 }, 00:12:19.044 { 00:12:19.044 "name": "BaseBdev4", 00:12:19.044 "uuid": "488a900b-fe31-458a-b152-6b97b714c6c5", 00:12:19.044 "is_configured": true, 00:12:19.044 "data_offset": 2048, 00:12:19.044 "data_size": 63488 00:12:19.044 } 00:12:19.044 ] 00:12:19.044 }' 00:12:19.044 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.044 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.302 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.302 15:39:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.302 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.302 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:19.302 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.303 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:19.303 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:19.303 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.303 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.303 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.303 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.561 15:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 81ec16a1-0d03-4106-a6ad-ca863792d616 00:12:19.561 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.561 15:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.561 [2024-11-25 15:39:18.020950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:19.561 [2024-11-25 15:39:18.021234] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:19.561 [2024-11-25 15:39:18.021252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:19.561 [2024-11-25 15:39:18.021516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:19.561 
[2024-11-25 15:39:18.021674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:19.561 [2024-11-25 15:39:18.021684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:19.561 NewBaseBdev 00:12:19.561 [2024-11-25 15:39:18.021835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.561 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.561 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:19.561 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:19.561 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:19.561 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:19.561 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:19.561 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:19.561 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:19.561 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.561 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.561 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.561 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:19.561 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.561 15:39:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:19.561 [ 00:12:19.561 { 00:12:19.561 "name": "NewBaseBdev", 00:12:19.561 "aliases": [ 00:12:19.561 "81ec16a1-0d03-4106-a6ad-ca863792d616" 00:12:19.561 ], 00:12:19.561 "product_name": "Malloc disk", 00:12:19.561 "block_size": 512, 00:12:19.561 "num_blocks": 65536, 00:12:19.561 "uuid": "81ec16a1-0d03-4106-a6ad-ca863792d616", 00:12:19.561 "assigned_rate_limits": { 00:12:19.561 "rw_ios_per_sec": 0, 00:12:19.561 "rw_mbytes_per_sec": 0, 00:12:19.561 "r_mbytes_per_sec": 0, 00:12:19.561 "w_mbytes_per_sec": 0 00:12:19.561 }, 00:12:19.561 "claimed": true, 00:12:19.561 "claim_type": "exclusive_write", 00:12:19.561 "zoned": false, 00:12:19.561 "supported_io_types": { 00:12:19.561 "read": true, 00:12:19.561 "write": true, 00:12:19.561 "unmap": true, 00:12:19.561 "flush": true, 00:12:19.561 "reset": true, 00:12:19.561 "nvme_admin": false, 00:12:19.561 "nvme_io": false, 00:12:19.561 "nvme_io_md": false, 00:12:19.561 "write_zeroes": true, 00:12:19.561 "zcopy": true, 00:12:19.561 "get_zone_info": false, 00:12:19.561 "zone_management": false, 00:12:19.561 "zone_append": false, 00:12:19.561 "compare": false, 00:12:19.561 "compare_and_write": false, 00:12:19.561 "abort": true, 00:12:19.561 "seek_hole": false, 00:12:19.561 "seek_data": false, 00:12:19.561 "copy": true, 00:12:19.561 "nvme_iov_md": false 00:12:19.561 }, 00:12:19.561 "memory_domains": [ 00:12:19.561 { 00:12:19.561 "dma_device_id": "system", 00:12:19.561 "dma_device_type": 1 00:12:19.561 }, 00:12:19.561 { 00:12:19.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.561 "dma_device_type": 2 00:12:19.561 } 00:12:19.562 ], 00:12:19.562 "driver_specific": {} 00:12:19.562 } 00:12:19.562 ] 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.562 "name": "Existed_Raid", 00:12:19.562 "uuid": "e68eafd2-b416-41ff-8ab1-281e4c63dcaf", 00:12:19.562 "strip_size_kb": 0, 00:12:19.562 "state": "online", 00:12:19.562 "raid_level": 
"raid1", 00:12:19.562 "superblock": true, 00:12:19.562 "num_base_bdevs": 4, 00:12:19.562 "num_base_bdevs_discovered": 4, 00:12:19.562 "num_base_bdevs_operational": 4, 00:12:19.562 "base_bdevs_list": [ 00:12:19.562 { 00:12:19.562 "name": "NewBaseBdev", 00:12:19.562 "uuid": "81ec16a1-0d03-4106-a6ad-ca863792d616", 00:12:19.562 "is_configured": true, 00:12:19.562 "data_offset": 2048, 00:12:19.562 "data_size": 63488 00:12:19.562 }, 00:12:19.562 { 00:12:19.562 "name": "BaseBdev2", 00:12:19.562 "uuid": "8582cc28-708a-4594-b61d-ffb648c3d4d2", 00:12:19.562 "is_configured": true, 00:12:19.562 "data_offset": 2048, 00:12:19.562 "data_size": 63488 00:12:19.562 }, 00:12:19.562 { 00:12:19.562 "name": "BaseBdev3", 00:12:19.562 "uuid": "64281776-661f-4185-8378-16d98abd7b0f", 00:12:19.562 "is_configured": true, 00:12:19.562 "data_offset": 2048, 00:12:19.562 "data_size": 63488 00:12:19.562 }, 00:12:19.562 { 00:12:19.562 "name": "BaseBdev4", 00:12:19.562 "uuid": "488a900b-fe31-458a-b152-6b97b714c6c5", 00:12:19.562 "is_configured": true, 00:12:19.562 "data_offset": 2048, 00:12:19.562 "data_size": 63488 00:12:19.562 } 00:12:19.562 ] 00:12:19.562 }' 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.562 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.821 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:19.821 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:19.821 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:19.821 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:19.821 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:19.821 15:39:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:19.821 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:19.821 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:19.821 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.821 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.821 [2024-11-25 15:39:18.452571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.821 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.821 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:19.821 "name": "Existed_Raid", 00:12:19.821 "aliases": [ 00:12:19.821 "e68eafd2-b416-41ff-8ab1-281e4c63dcaf" 00:12:19.821 ], 00:12:19.821 "product_name": "Raid Volume", 00:12:19.821 "block_size": 512, 00:12:19.821 "num_blocks": 63488, 00:12:19.821 "uuid": "e68eafd2-b416-41ff-8ab1-281e4c63dcaf", 00:12:19.821 "assigned_rate_limits": { 00:12:19.821 "rw_ios_per_sec": 0, 00:12:19.821 "rw_mbytes_per_sec": 0, 00:12:19.821 "r_mbytes_per_sec": 0, 00:12:19.821 "w_mbytes_per_sec": 0 00:12:19.821 }, 00:12:19.821 "claimed": false, 00:12:19.821 "zoned": false, 00:12:19.821 "supported_io_types": { 00:12:19.821 "read": true, 00:12:19.821 "write": true, 00:12:19.821 "unmap": false, 00:12:19.821 "flush": false, 00:12:19.821 "reset": true, 00:12:19.821 "nvme_admin": false, 00:12:19.821 "nvme_io": false, 00:12:19.821 "nvme_io_md": false, 00:12:19.821 "write_zeroes": true, 00:12:19.821 "zcopy": false, 00:12:19.821 "get_zone_info": false, 00:12:19.821 "zone_management": false, 00:12:19.821 "zone_append": false, 00:12:19.821 "compare": false, 00:12:19.821 "compare_and_write": false, 00:12:19.821 "abort": false, 00:12:19.821 "seek_hole": false, 
00:12:19.821 "seek_data": false, 00:12:19.821 "copy": false, 00:12:19.821 "nvme_iov_md": false 00:12:19.821 }, 00:12:19.821 "memory_domains": [ 00:12:19.821 { 00:12:19.821 "dma_device_id": "system", 00:12:19.821 "dma_device_type": 1 00:12:19.821 }, 00:12:19.821 { 00:12:19.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.821 "dma_device_type": 2 00:12:19.821 }, 00:12:19.821 { 00:12:19.821 "dma_device_id": "system", 00:12:19.821 "dma_device_type": 1 00:12:19.821 }, 00:12:19.821 { 00:12:19.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.821 "dma_device_type": 2 00:12:19.821 }, 00:12:19.821 { 00:12:19.821 "dma_device_id": "system", 00:12:19.821 "dma_device_type": 1 00:12:19.821 }, 00:12:19.821 { 00:12:19.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.821 "dma_device_type": 2 00:12:19.821 }, 00:12:19.821 { 00:12:19.821 "dma_device_id": "system", 00:12:19.821 "dma_device_type": 1 00:12:19.821 }, 00:12:19.821 { 00:12:19.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.821 "dma_device_type": 2 00:12:19.821 } 00:12:19.821 ], 00:12:19.821 "driver_specific": { 00:12:19.821 "raid": { 00:12:19.821 "uuid": "e68eafd2-b416-41ff-8ab1-281e4c63dcaf", 00:12:19.821 "strip_size_kb": 0, 00:12:19.821 "state": "online", 00:12:19.821 "raid_level": "raid1", 00:12:19.821 "superblock": true, 00:12:19.821 "num_base_bdevs": 4, 00:12:19.821 "num_base_bdevs_discovered": 4, 00:12:19.821 "num_base_bdevs_operational": 4, 00:12:19.821 "base_bdevs_list": [ 00:12:19.821 { 00:12:19.821 "name": "NewBaseBdev", 00:12:19.821 "uuid": "81ec16a1-0d03-4106-a6ad-ca863792d616", 00:12:19.821 "is_configured": true, 00:12:19.821 "data_offset": 2048, 00:12:19.821 "data_size": 63488 00:12:19.821 }, 00:12:19.821 { 00:12:19.821 "name": "BaseBdev2", 00:12:19.821 "uuid": "8582cc28-708a-4594-b61d-ffb648c3d4d2", 00:12:19.821 "is_configured": true, 00:12:19.821 "data_offset": 2048, 00:12:19.821 "data_size": 63488 00:12:19.821 }, 00:12:19.821 { 00:12:19.821 "name": "BaseBdev3", 00:12:19.821 "uuid": 
"64281776-661f-4185-8378-16d98abd7b0f", 00:12:19.821 "is_configured": true, 00:12:19.821 "data_offset": 2048, 00:12:19.821 "data_size": 63488 00:12:19.821 }, 00:12:19.821 { 00:12:19.821 "name": "BaseBdev4", 00:12:19.821 "uuid": "488a900b-fe31-458a-b152-6b97b714c6c5", 00:12:19.821 "is_configured": true, 00:12:19.821 "data_offset": 2048, 00:12:19.821 "data_size": 63488 00:12:19.821 } 00:12:19.821 ] 00:12:19.821 } 00:12:19.821 } 00:12:19.821 }' 00:12:19.821 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:20.081 BaseBdev2 00:12:20.081 BaseBdev3 00:12:20.081 BaseBdev4' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.081 
15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.081 [2024-11-25 15:39:18.707799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.081 [2024-11-25 15:39:18.707826] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.081 [2024-11-25 15:39:18.707896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.081 [2024-11-25 15:39:18.708193] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.081 [2024-11-25 15:39:18.708208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:20.081 15:39:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73581 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73581 ']' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73581 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73581 00:12:20.081 killing process with pid 73581 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73581' 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73581 00:12:20.081 [2024-11-25 15:39:18.749345] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.081 15:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73581 00:12:20.649 [2024-11-25 15:39:19.125662] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.583 15:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:21.583 00:12:21.583 real 0m11.312s 00:12:21.583 user 0m17.969s 00:12:21.583 sys 0m1.993s 00:12:21.583 15:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.583 15:39:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:21.583 ************************************ 00:12:21.583 END TEST raid_state_function_test_sb 00:12:21.583 ************************************ 00:12:21.583 15:39:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:21.583 15:39:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:21.583 15:39:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.583 15:39:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.583 ************************************ 00:12:21.583 START TEST raid_superblock_test 00:12:21.583 ************************************ 00:12:21.583 15:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74246 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74246 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74246 ']' 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.842 15:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.842 [2024-11-25 15:39:20.350892] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:12:21.842 [2024-11-25 15:39:20.351040] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74246 ] 00:12:21.842 [2024-11-25 15:39:20.504400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.100 [2024-11-25 15:39:20.613461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.359 [2024-11-25 15:39:20.805454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.359 [2024-11-25 15:39:20.805599] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:22.618 
15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.618 malloc1 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.618 [2024-11-25 15:39:21.226862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:22.618 [2024-11-25 15:39:21.226995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.618 [2024-11-25 15:39:21.227051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:22.618 [2024-11-25 15:39:21.227093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.618 [2024-11-25 15:39:21.229163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.618 [2024-11-25 15:39:21.229229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:22.618 pt1 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.618 malloc2 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.618 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.618 [2024-11-25 15:39:21.285343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:22.619 [2024-11-25 15:39:21.285432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.619 [2024-11-25 15:39:21.285486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:22.619 [2024-11-25 15:39:21.285496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.619 [2024-11-25 15:39:21.287490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.619 [2024-11-25 15:39:21.287568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:22.619 
pt2 00:12:22.619 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.619 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:22.619 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:22.619 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:22.619 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:22.619 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:22.619 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:22.619 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:22.619 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:22.619 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:22.619 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.619 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.880 malloc3 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.880 [2024-11-25 15:39:21.349528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:22.880 [2024-11-25 15:39:21.349627] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.880 [2024-11-25 15:39:21.349683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:22.880 [2024-11-25 15:39:21.349711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.880 [2024-11-25 15:39:21.351791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.880 [2024-11-25 15:39:21.351880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:22.880 pt3 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.880 malloc4 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.880 [2024-11-25 15:39:21.406488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:22.880 [2024-11-25 15:39:21.406578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.880 [2024-11-25 15:39:21.406612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:22.880 [2024-11-25 15:39:21.406640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.880 [2024-11-25 15:39:21.408664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.880 [2024-11-25 15:39:21.408737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:22.880 pt4 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:22.880 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.881 [2024-11-25 15:39:21.418529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:22.881 [2024-11-25 15:39:21.420771] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:22.881 [2024-11-25 15:39:21.420880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:22.881 [2024-11-25 15:39:21.420925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:22.881 [2024-11-25 15:39:21.421151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:22.881 [2024-11-25 15:39:21.421171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:22.881 [2024-11-25 15:39:21.421439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:22.881 [2024-11-25 15:39:21.421612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:22.881 [2024-11-25 15:39:21.421628] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:22.881 [2024-11-25 15:39:21.421777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.881 
15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.881 "name": "raid_bdev1", 00:12:22.881 "uuid": "40fdffeb-a74a-4596-85f9-4f6da9734c96", 00:12:22.881 "strip_size_kb": 0, 00:12:22.881 "state": "online", 00:12:22.881 "raid_level": "raid1", 00:12:22.881 "superblock": true, 00:12:22.881 "num_base_bdevs": 4, 00:12:22.881 "num_base_bdevs_discovered": 4, 00:12:22.881 "num_base_bdevs_operational": 4, 00:12:22.881 "base_bdevs_list": [ 00:12:22.881 { 00:12:22.881 "name": "pt1", 00:12:22.881 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.881 "is_configured": true, 00:12:22.881 "data_offset": 2048, 00:12:22.881 "data_size": 63488 00:12:22.881 }, 00:12:22.881 { 00:12:22.881 "name": "pt2", 00:12:22.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.881 "is_configured": true, 00:12:22.881 "data_offset": 2048, 00:12:22.881 "data_size": 63488 00:12:22.881 }, 00:12:22.881 { 00:12:22.881 "name": "pt3", 00:12:22.881 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.881 "is_configured": true, 00:12:22.881 "data_offset": 2048, 00:12:22.881 "data_size": 63488 
00:12:22.881 }, 00:12:22.881 { 00:12:22.881 "name": "pt4", 00:12:22.881 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:22.881 "is_configured": true, 00:12:22.881 "data_offset": 2048, 00:12:22.881 "data_size": 63488 00:12:22.881 } 00:12:22.881 ] 00:12:22.881 }' 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.881 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.449 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:23.449 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:23.449 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:23.449 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:23.449 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:23.449 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:23.449 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:23.449 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.449 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.449 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:23.449 [2024-11-25 15:39:21.901965] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.449 15:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.449 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:23.449 "name": "raid_bdev1", 00:12:23.449 "aliases": [ 00:12:23.449 "40fdffeb-a74a-4596-85f9-4f6da9734c96" 00:12:23.449 ], 
00:12:23.449 "product_name": "Raid Volume", 00:12:23.449 "block_size": 512, 00:12:23.449 "num_blocks": 63488, 00:12:23.449 "uuid": "40fdffeb-a74a-4596-85f9-4f6da9734c96", 00:12:23.449 "assigned_rate_limits": { 00:12:23.449 "rw_ios_per_sec": 0, 00:12:23.449 "rw_mbytes_per_sec": 0, 00:12:23.449 "r_mbytes_per_sec": 0, 00:12:23.449 "w_mbytes_per_sec": 0 00:12:23.449 }, 00:12:23.449 "claimed": false, 00:12:23.449 "zoned": false, 00:12:23.449 "supported_io_types": { 00:12:23.449 "read": true, 00:12:23.449 "write": true, 00:12:23.449 "unmap": false, 00:12:23.449 "flush": false, 00:12:23.449 "reset": true, 00:12:23.449 "nvme_admin": false, 00:12:23.449 "nvme_io": false, 00:12:23.449 "nvme_io_md": false, 00:12:23.449 "write_zeroes": true, 00:12:23.449 "zcopy": false, 00:12:23.449 "get_zone_info": false, 00:12:23.449 "zone_management": false, 00:12:23.449 "zone_append": false, 00:12:23.449 "compare": false, 00:12:23.449 "compare_and_write": false, 00:12:23.449 "abort": false, 00:12:23.449 "seek_hole": false, 00:12:23.449 "seek_data": false, 00:12:23.449 "copy": false, 00:12:23.449 "nvme_iov_md": false 00:12:23.449 }, 00:12:23.449 "memory_domains": [ 00:12:23.449 { 00:12:23.449 "dma_device_id": "system", 00:12:23.449 "dma_device_type": 1 00:12:23.449 }, 00:12:23.449 { 00:12:23.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.449 "dma_device_type": 2 00:12:23.449 }, 00:12:23.449 { 00:12:23.449 "dma_device_id": "system", 00:12:23.449 "dma_device_type": 1 00:12:23.449 }, 00:12:23.449 { 00:12:23.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.449 "dma_device_type": 2 00:12:23.449 }, 00:12:23.449 { 00:12:23.449 "dma_device_id": "system", 00:12:23.449 "dma_device_type": 1 00:12:23.449 }, 00:12:23.449 { 00:12:23.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.449 "dma_device_type": 2 00:12:23.449 }, 00:12:23.449 { 00:12:23.449 "dma_device_id": "system", 00:12:23.449 "dma_device_type": 1 00:12:23.449 }, 00:12:23.449 { 00:12:23.449 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:23.449 "dma_device_type": 2 00:12:23.449 } 00:12:23.449 ], 00:12:23.449 "driver_specific": { 00:12:23.449 "raid": { 00:12:23.449 "uuid": "40fdffeb-a74a-4596-85f9-4f6da9734c96", 00:12:23.449 "strip_size_kb": 0, 00:12:23.449 "state": "online", 00:12:23.449 "raid_level": "raid1", 00:12:23.449 "superblock": true, 00:12:23.449 "num_base_bdevs": 4, 00:12:23.449 "num_base_bdevs_discovered": 4, 00:12:23.449 "num_base_bdevs_operational": 4, 00:12:23.449 "base_bdevs_list": [ 00:12:23.449 { 00:12:23.449 "name": "pt1", 00:12:23.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:23.449 "is_configured": true, 00:12:23.449 "data_offset": 2048, 00:12:23.449 "data_size": 63488 00:12:23.449 }, 00:12:23.449 { 00:12:23.449 "name": "pt2", 00:12:23.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.449 "is_configured": true, 00:12:23.449 "data_offset": 2048, 00:12:23.449 "data_size": 63488 00:12:23.449 }, 00:12:23.449 { 00:12:23.449 "name": "pt3", 00:12:23.449 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.449 "is_configured": true, 00:12:23.449 "data_offset": 2048, 00:12:23.449 "data_size": 63488 00:12:23.449 }, 00:12:23.449 { 00:12:23.449 "name": "pt4", 00:12:23.449 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:23.449 "is_configured": true, 00:12:23.449 "data_offset": 2048, 00:12:23.449 "data_size": 63488 00:12:23.449 } 00:12:23.449 ] 00:12:23.449 } 00:12:23.449 } 00:12:23.449 }' 00:12:23.449 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:23.449 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:23.449 pt2 00:12:23.449 pt3 00:12:23.449 pt4' 00:12:23.449 15:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.449 15:39:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.449 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.707 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.707 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.707 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.707 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.707 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:23.707 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.707 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.707 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.707 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.707 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.707 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.707 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:23.707 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:23.707 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:23.707 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.707 [2024-11-25 15:39:22.201385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=40fdffeb-a74a-4596-85f9-4f6da9734c96 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 40fdffeb-a74a-4596-85f9-4f6da9734c96 ']' 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.708 [2024-11-25 15:39:22.249036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.708 [2024-11-25 15:39:22.249099] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.708 [2024-11-25 15:39:22.249214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.708 [2024-11-25 15:39:22.249316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.708 [2024-11-25 15:39:22.249365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.708 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:23.967 15:39:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.967 [2024-11-25 15:39:22.412801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:23.967 [2024-11-25 15:39:22.414615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:23.967 [2024-11-25 15:39:22.414667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:23.967 [2024-11-25 15:39:22.414709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:23.967 [2024-11-25 15:39:22.414757] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:23.967 [2024-11-25 15:39:22.414813] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:23.967 [2024-11-25 15:39:22.414831] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:23.967 [2024-11-25 15:39:22.414848] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:23.967 [2024-11-25 15:39:22.414860] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.967 [2024-11-25 15:39:22.414870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:23.967 request: 00:12:23.967 { 00:12:23.967 "name": "raid_bdev1", 00:12:23.967 "raid_level": "raid1", 00:12:23.967 "base_bdevs": [ 00:12:23.967 "malloc1", 00:12:23.967 "malloc2", 00:12:23.967 "malloc3", 00:12:23.967 "malloc4" 00:12:23.967 ], 00:12:23.967 "superblock": false, 00:12:23.967 "method": "bdev_raid_create", 00:12:23.967 "req_id": 1 00:12:23.967 } 00:12:23.967 Got JSON-RPC error response 00:12:23.967 response: 00:12:23.967 { 00:12:23.967 "code": -17, 00:12:23.967 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:23.967 } 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:23.967 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:23.968 
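The duplicate-create attempt above is expected to fail: the malloc bdevs still carry raid_bdev1's superblock, so `bdev_raid_create` returns JSON-RPC error -17 ("File exists") and the `NOT` wrapper treats the non-zero exit status as a pass. A minimal Python sketch of that assertion logic, using the error response printed in the log (`expect_failure` is a hypothetical helper, not part of the test suite):

```python
import json

# JSON-RPC error response copied from the log above (bdev_raid_create on
# base bdevs that still hold the old raid superblock).
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

def expect_failure(resp: dict, code: int = -17) -> bool:
    # Mirrors what the test's NOT wrapper effectively checks: the RPC
    # must fail, and with this specific JSON-RPC error code.
    return resp.get("code") == code

assert expect_failure(response)
```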
15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.968 [2024-11-25 15:39:22.472654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:23.968 [2024-11-25 15:39:22.472714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.968 [2024-11-25 15:39:22.472732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:23.968 [2024-11-25 15:39:22.472742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.968 [2024-11-25 15:39:22.474903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.968 [2024-11-25 15:39:22.474948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:23.968 [2024-11-25 15:39:22.475037] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:23.968 [2024-11-25 15:39:22.475096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:23.968 pt1 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.968 15:39:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.968 "name": "raid_bdev1", 00:12:23.968 "uuid": "40fdffeb-a74a-4596-85f9-4f6da9734c96", 00:12:23.968 "strip_size_kb": 0, 00:12:23.968 "state": "configuring", 00:12:23.968 "raid_level": "raid1", 00:12:23.968 "superblock": true, 00:12:23.968 "num_base_bdevs": 4, 00:12:23.968 "num_base_bdevs_discovered": 1, 00:12:23.968 "num_base_bdevs_operational": 4, 00:12:23.968 "base_bdevs_list": [ 00:12:23.968 { 00:12:23.968 "name": "pt1", 00:12:23.968 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:23.968 "is_configured": true, 00:12:23.968 "data_offset": 2048, 00:12:23.968 "data_size": 63488 00:12:23.968 }, 00:12:23.968 { 00:12:23.968 "name": null, 00:12:23.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.968 "is_configured": false, 00:12:23.968 "data_offset": 2048, 00:12:23.968 "data_size": 63488 00:12:23.968 }, 00:12:23.968 { 00:12:23.968 "name": null, 00:12:23.968 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.968 
"is_configured": false, 00:12:23.968 "data_offset": 2048, 00:12:23.968 "data_size": 63488 00:12:23.968 }, 00:12:23.968 { 00:12:23.968 "name": null, 00:12:23.968 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:23.968 "is_configured": false, 00:12:23.968 "data_offset": 2048, 00:12:23.968 "data_size": 63488 00:12:23.968 } 00:12:23.968 ] 00:12:23.968 }' 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.968 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.226 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:24.226 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:24.227 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.227 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.227 [2024-11-25 15:39:22.895992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:24.227 [2024-11-25 15:39:22.896152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.227 [2024-11-25 15:39:22.896193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:24.227 [2024-11-25 15:39:22.896225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.227 [2024-11-25 15:39:22.896692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.227 [2024-11-25 15:39:22.896752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:24.227 [2024-11-25 15:39:22.896863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:24.227 [2024-11-25 15:39:22.896925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
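After the array is deleted and only pt1/pt2 are re-registered, `verify_raid_bdev_state` checks that raid_bdev1 is back in the `configuring` state with the expected base-bdev counts. A hypothetical Python re-implementation of that comparison, fed a trimmed copy of the `raid_bdev_info` JSON captured in the log (the function name and the reduced field set are assumptions for illustration):

```python
import json

# raid_bdev_info as captured in the log, trimmed to the fields the
# verify_raid_bdev_state helper actually compares.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "configuring",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    # Every field must match what the current test stage expects.
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

# With only pt1 configured, the array must still be "configuring".
assert verify_raid_bdev_state(raid_bdev_info, "configuring", "raid1", 0, 4)
```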
00:12:24.227 pt2 00:12:24.227 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.227 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:24.227 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.227 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.486 [2024-11-25 15:39:22.907962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.486 "name": "raid_bdev1", 00:12:24.486 "uuid": "40fdffeb-a74a-4596-85f9-4f6da9734c96", 00:12:24.486 "strip_size_kb": 0, 00:12:24.486 "state": "configuring", 00:12:24.486 "raid_level": "raid1", 00:12:24.486 "superblock": true, 00:12:24.486 "num_base_bdevs": 4, 00:12:24.486 "num_base_bdevs_discovered": 1, 00:12:24.486 "num_base_bdevs_operational": 4, 00:12:24.486 "base_bdevs_list": [ 00:12:24.486 { 00:12:24.486 "name": "pt1", 00:12:24.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:24.486 "is_configured": true, 00:12:24.486 "data_offset": 2048, 00:12:24.486 "data_size": 63488 00:12:24.486 }, 00:12:24.486 { 00:12:24.486 "name": null, 00:12:24.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.486 "is_configured": false, 00:12:24.486 "data_offset": 0, 00:12:24.486 "data_size": 63488 00:12:24.486 }, 00:12:24.486 { 00:12:24.486 "name": null, 00:12:24.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.486 "is_configured": false, 00:12:24.486 "data_offset": 2048, 00:12:24.486 "data_size": 63488 00:12:24.486 }, 00:12:24.486 { 00:12:24.486 "name": null, 00:12:24.486 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:24.486 "is_configured": false, 00:12:24.486 "data_offset": 2048, 00:12:24.486 "data_size": 63488 00:12:24.486 } 00:12:24.486 ] 00:12:24.486 }' 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.486 15:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.745 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:24.745 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:24.745 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:24.745 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.745 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.745 [2024-11-25 15:39:23.303284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:24.745 [2024-11-25 15:39:23.303357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.745 [2024-11-25 15:39:23.303384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:24.745 [2024-11-25 15:39:23.303395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.745 [2024-11-25 15:39:23.303835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.745 [2024-11-25 15:39:23.303852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:24.745 [2024-11-25 15:39:23.303937] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:24.745 [2024-11-25 15:39:23.303958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:24.745 pt2 00:12:24.745 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.745 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:24.745 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:24.745 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:24.745 15:39:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.745 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.745 [2024-11-25 15:39:23.315229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:24.745 [2024-11-25 15:39:23.315321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.745 [2024-11-25 15:39:23.315349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:24.745 [2024-11-25 15:39:23.315356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.745 [2024-11-25 15:39:23.315752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.745 [2024-11-25 15:39:23.315769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:24.745 [2024-11-25 15:39:23.315837] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:24.745 [2024-11-25 15:39:23.315855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:24.745 pt3 00:12:24.745 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.745 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:24.745 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.746 [2024-11-25 15:39:23.327185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:24.746 [2024-11-25 
15:39:23.327227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.746 [2024-11-25 15:39:23.327242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:24.746 [2024-11-25 15:39:23.327249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.746 [2024-11-25 15:39:23.327608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.746 [2024-11-25 15:39:23.327623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:24.746 [2024-11-25 15:39:23.327676] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:24.746 [2024-11-25 15:39:23.327692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:24.746 [2024-11-25 15:39:23.327835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:24.746 [2024-11-25 15:39:23.327843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:24.746 [2024-11-25 15:39:23.328088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:24.746 [2024-11-25 15:39:23.328243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:24.746 [2024-11-25 15:39:23.328261] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:24.746 [2024-11-25 15:39:23.328401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.746 pt4 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.746 "name": "raid_bdev1", 00:12:24.746 "uuid": "40fdffeb-a74a-4596-85f9-4f6da9734c96", 00:12:24.746 "strip_size_kb": 0, 00:12:24.746 "state": "online", 00:12:24.746 "raid_level": "raid1", 00:12:24.746 "superblock": true, 00:12:24.746 "num_base_bdevs": 4, 00:12:24.746 
"num_base_bdevs_discovered": 4, 00:12:24.746 "num_base_bdevs_operational": 4, 00:12:24.746 "base_bdevs_list": [ 00:12:24.746 { 00:12:24.746 "name": "pt1", 00:12:24.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:24.746 "is_configured": true, 00:12:24.746 "data_offset": 2048, 00:12:24.746 "data_size": 63488 00:12:24.746 }, 00:12:24.746 { 00:12:24.746 "name": "pt2", 00:12:24.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.746 "is_configured": true, 00:12:24.746 "data_offset": 2048, 00:12:24.746 "data_size": 63488 00:12:24.746 }, 00:12:24.746 { 00:12:24.746 "name": "pt3", 00:12:24.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.746 "is_configured": true, 00:12:24.746 "data_offset": 2048, 00:12:24.746 "data_size": 63488 00:12:24.746 }, 00:12:24.746 { 00:12:24.746 "name": "pt4", 00:12:24.746 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:24.746 "is_configured": true, 00:12:24.746 "data_offset": 2048, 00:12:24.746 "data_size": 63488 00:12:24.746 } 00:12:24.746 ] 00:12:24.746 }' 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.746 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.314 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:25.314 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:25.314 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:25.314 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:25.314 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:25.314 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:25.314 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:25.314 15:39:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.314 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.314 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.314 [2024-11-25 15:39:23.718889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.314 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.314 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:25.314 "name": "raid_bdev1", 00:12:25.314 "aliases": [ 00:12:25.314 "40fdffeb-a74a-4596-85f9-4f6da9734c96" 00:12:25.314 ], 00:12:25.314 "product_name": "Raid Volume", 00:12:25.314 "block_size": 512, 00:12:25.314 "num_blocks": 63488, 00:12:25.314 "uuid": "40fdffeb-a74a-4596-85f9-4f6da9734c96", 00:12:25.314 "assigned_rate_limits": { 00:12:25.314 "rw_ios_per_sec": 0, 00:12:25.314 "rw_mbytes_per_sec": 0, 00:12:25.314 "r_mbytes_per_sec": 0, 00:12:25.314 "w_mbytes_per_sec": 0 00:12:25.314 }, 00:12:25.314 "claimed": false, 00:12:25.314 "zoned": false, 00:12:25.314 "supported_io_types": { 00:12:25.314 "read": true, 00:12:25.314 "write": true, 00:12:25.314 "unmap": false, 00:12:25.314 "flush": false, 00:12:25.314 "reset": true, 00:12:25.314 "nvme_admin": false, 00:12:25.314 "nvme_io": false, 00:12:25.314 "nvme_io_md": false, 00:12:25.314 "write_zeroes": true, 00:12:25.314 "zcopy": false, 00:12:25.314 "get_zone_info": false, 00:12:25.314 "zone_management": false, 00:12:25.314 "zone_append": false, 00:12:25.314 "compare": false, 00:12:25.314 "compare_and_write": false, 00:12:25.314 "abort": false, 00:12:25.314 "seek_hole": false, 00:12:25.314 "seek_data": false, 00:12:25.314 "copy": false, 00:12:25.314 "nvme_iov_md": false 00:12:25.314 }, 00:12:25.314 "memory_domains": [ 00:12:25.314 { 00:12:25.314 "dma_device_id": "system", 00:12:25.314 
"dma_device_type": 1 00:12:25.315 }, 00:12:25.315 { 00:12:25.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.315 "dma_device_type": 2 00:12:25.315 }, 00:12:25.315 { 00:12:25.315 "dma_device_id": "system", 00:12:25.315 "dma_device_type": 1 00:12:25.315 }, 00:12:25.315 { 00:12:25.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.315 "dma_device_type": 2 00:12:25.315 }, 00:12:25.315 { 00:12:25.315 "dma_device_id": "system", 00:12:25.315 "dma_device_type": 1 00:12:25.315 }, 00:12:25.315 { 00:12:25.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.315 "dma_device_type": 2 00:12:25.315 }, 00:12:25.315 { 00:12:25.315 "dma_device_id": "system", 00:12:25.315 "dma_device_type": 1 00:12:25.315 }, 00:12:25.315 { 00:12:25.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.315 "dma_device_type": 2 00:12:25.315 } 00:12:25.315 ], 00:12:25.315 "driver_specific": { 00:12:25.315 "raid": { 00:12:25.315 "uuid": "40fdffeb-a74a-4596-85f9-4f6da9734c96", 00:12:25.315 "strip_size_kb": 0, 00:12:25.315 "state": "online", 00:12:25.315 "raid_level": "raid1", 00:12:25.315 "superblock": true, 00:12:25.315 "num_base_bdevs": 4, 00:12:25.315 "num_base_bdevs_discovered": 4, 00:12:25.315 "num_base_bdevs_operational": 4, 00:12:25.315 "base_bdevs_list": [ 00:12:25.315 { 00:12:25.315 "name": "pt1", 00:12:25.315 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:25.315 "is_configured": true, 00:12:25.315 "data_offset": 2048, 00:12:25.315 "data_size": 63488 00:12:25.315 }, 00:12:25.315 { 00:12:25.315 "name": "pt2", 00:12:25.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.315 "is_configured": true, 00:12:25.315 "data_offset": 2048, 00:12:25.315 "data_size": 63488 00:12:25.315 }, 00:12:25.315 { 00:12:25.315 "name": "pt3", 00:12:25.315 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.315 "is_configured": true, 00:12:25.315 "data_offset": 2048, 00:12:25.315 "data_size": 63488 00:12:25.315 }, 00:12:25.315 { 00:12:25.315 "name": "pt4", 00:12:25.315 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:25.315 "is_configured": true, 00:12:25.315 "data_offset": 2048, 00:12:25.315 "data_size": 63488 00:12:25.315 } 00:12:25.315 ] 00:12:25.315 } 00:12:25.315 } 00:12:25.315 }' 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:25.315 pt2 00:12:25.315 pt3 00:12:25.315 pt4' 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.315 15:39:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.315 15:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:25.315 [2024-11-25 15:39:23.990345] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 40fdffeb-a74a-4596-85f9-4f6da9734c96 '!=' 40fdffeb-a74a-4596-85f9-4f6da9734c96 ']' 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.610 [2024-11-25 15:39:24.038036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:25.610 
15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.610 "name": "raid_bdev1", 00:12:25.610 "uuid": "40fdffeb-a74a-4596-85f9-4f6da9734c96", 00:12:25.610 "strip_size_kb": 0, 00:12:25.610 "state": 
"online", 00:12:25.610 "raid_level": "raid1", 00:12:25.610 "superblock": true, 00:12:25.610 "num_base_bdevs": 4, 00:12:25.610 "num_base_bdevs_discovered": 3, 00:12:25.610 "num_base_bdevs_operational": 3, 00:12:25.610 "base_bdevs_list": [ 00:12:25.610 { 00:12:25.610 "name": null, 00:12:25.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.610 "is_configured": false, 00:12:25.610 "data_offset": 0, 00:12:25.610 "data_size": 63488 00:12:25.610 }, 00:12:25.610 { 00:12:25.610 "name": "pt2", 00:12:25.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.610 "is_configured": true, 00:12:25.610 "data_offset": 2048, 00:12:25.610 "data_size": 63488 00:12:25.610 }, 00:12:25.610 { 00:12:25.610 "name": "pt3", 00:12:25.610 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.610 "is_configured": true, 00:12:25.610 "data_offset": 2048, 00:12:25.610 "data_size": 63488 00:12:25.610 }, 00:12:25.610 { 00:12:25.610 "name": "pt4", 00:12:25.610 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:25.610 "is_configured": true, 00:12:25.610 "data_offset": 2048, 00:12:25.610 "data_size": 63488 00:12:25.610 } 00:12:25.610 ] 00:12:25.610 }' 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.610 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.869 [2024-11-25 15:39:24.481206] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:25.869 [2024-11-25 15:39:24.481282] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.869 [2024-11-25 15:39:24.481373] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.869 [2024-11-25 15:39:24.481478] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.869 [2024-11-25 15:39:24.481525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.869 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.128 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.128 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:26.128 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:12:26.128 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:26.128 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.128 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.128 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.128 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:26.128 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:26.128 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:26.128 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.128 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.128 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.128 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:26.128 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.129 [2024-11-25 15:39:24.581045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:26.129 [2024-11-25 
15:39:24.581090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.129 [2024-11-25 15:39:24.581108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:26.129 [2024-11-25 15:39:24.581116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.129 [2024-11-25 15:39:24.583233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.129 [2024-11-25 15:39:24.583270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:26.129 [2024-11-25 15:39:24.583344] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:26.129 [2024-11-25 15:39:24.583382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:26.129 pt2 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.129 15:39:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.129 "name": "raid_bdev1", 00:12:26.129 "uuid": "40fdffeb-a74a-4596-85f9-4f6da9734c96", 00:12:26.129 "strip_size_kb": 0, 00:12:26.129 "state": "configuring", 00:12:26.129 "raid_level": "raid1", 00:12:26.129 "superblock": true, 00:12:26.129 "num_base_bdevs": 4, 00:12:26.129 "num_base_bdevs_discovered": 1, 00:12:26.129 "num_base_bdevs_operational": 3, 00:12:26.129 "base_bdevs_list": [ 00:12:26.129 { 00:12:26.129 "name": null, 00:12:26.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.129 "is_configured": false, 00:12:26.129 "data_offset": 2048, 00:12:26.129 "data_size": 63488 00:12:26.129 }, 00:12:26.129 { 00:12:26.129 "name": "pt2", 00:12:26.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.129 "is_configured": true, 00:12:26.129 "data_offset": 2048, 00:12:26.129 "data_size": 63488 00:12:26.129 }, 00:12:26.129 { 00:12:26.129 "name": null, 00:12:26.129 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.129 "is_configured": false, 00:12:26.129 "data_offset": 2048, 00:12:26.129 "data_size": 63488 00:12:26.129 }, 00:12:26.129 { 00:12:26.129 "name": null, 00:12:26.129 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.129 "is_configured": false, 00:12:26.129 "data_offset": 2048, 00:12:26.129 "data_size": 63488 00:12:26.129 
} 00:12:26.129 ] 00:12:26.129 }' 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.129 15:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.389 [2024-11-25 15:39:25.028329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:26.389 [2024-11-25 15:39:25.028444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.389 [2024-11-25 15:39:25.028485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:26.389 [2024-11-25 15:39:25.028513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.389 [2024-11-25 15:39:25.029004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.389 [2024-11-25 15:39:25.029081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:26.389 [2024-11-25 15:39:25.029206] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:26.389 [2024-11-25 15:39:25.029258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:26.389 pt3 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.389 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.648 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.648 "name": "raid_bdev1", 00:12:26.648 "uuid": "40fdffeb-a74a-4596-85f9-4f6da9734c96", 00:12:26.648 "strip_size_kb": 0, 00:12:26.648 "state": "configuring", 00:12:26.648 "raid_level": "raid1", 00:12:26.648 "superblock": true, 00:12:26.648 "num_base_bdevs": 4, 00:12:26.648 "num_base_bdevs_discovered": 2, 
00:12:26.648 "num_base_bdevs_operational": 3, 00:12:26.648 "base_bdevs_list": [ 00:12:26.648 { 00:12:26.648 "name": null, 00:12:26.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.648 "is_configured": false, 00:12:26.648 "data_offset": 2048, 00:12:26.648 "data_size": 63488 00:12:26.648 }, 00:12:26.648 { 00:12:26.648 "name": "pt2", 00:12:26.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.648 "is_configured": true, 00:12:26.648 "data_offset": 2048, 00:12:26.648 "data_size": 63488 00:12:26.648 }, 00:12:26.648 { 00:12:26.648 "name": "pt3", 00:12:26.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.648 "is_configured": true, 00:12:26.648 "data_offset": 2048, 00:12:26.648 "data_size": 63488 00:12:26.648 }, 00:12:26.648 { 00:12:26.648 "name": null, 00:12:26.648 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.648 "is_configured": false, 00:12:26.648 "data_offset": 2048, 00:12:26.648 "data_size": 63488 00:12:26.648 } 00:12:26.648 ] 00:12:26.648 }' 00:12:26.648 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.648 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.907 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:26.907 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:26.907 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:26.907 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.908 [2024-11-25 15:39:25.379728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:26.908 [2024-11-25 
15:39:25.379796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.908 [2024-11-25 15:39:25.379819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:26.908 [2024-11-25 15:39:25.379827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.908 [2024-11-25 15:39:25.380320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.908 [2024-11-25 15:39:25.380344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:26.908 [2024-11-25 15:39:25.380432] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:26.908 [2024-11-25 15:39:25.380460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:26.908 [2024-11-25 15:39:25.380604] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:26.908 [2024-11-25 15:39:25.380612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.908 [2024-11-25 15:39:25.380863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:26.908 [2024-11-25 15:39:25.381013] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:26.908 [2024-11-25 15:39:25.381045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:26.908 [2024-11-25 15:39:25.381200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.908 pt4 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.908 15:39:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.908 "name": "raid_bdev1", 00:12:26.908 "uuid": "40fdffeb-a74a-4596-85f9-4f6da9734c96", 00:12:26.908 "strip_size_kb": 0, 00:12:26.908 "state": "online", 00:12:26.908 "raid_level": "raid1", 00:12:26.908 "superblock": true, 00:12:26.908 "num_base_bdevs": 4, 00:12:26.908 "num_base_bdevs_discovered": 3, 00:12:26.908 "num_base_bdevs_operational": 3, 00:12:26.908 "base_bdevs_list": [ 00:12:26.908 { 00:12:26.908 "name": null, 00:12:26.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.908 
"is_configured": false, 00:12:26.908 "data_offset": 2048, 00:12:26.908 "data_size": 63488 00:12:26.908 }, 00:12:26.908 { 00:12:26.908 "name": "pt2", 00:12:26.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.908 "is_configured": true, 00:12:26.908 "data_offset": 2048, 00:12:26.908 "data_size": 63488 00:12:26.908 }, 00:12:26.908 { 00:12:26.908 "name": "pt3", 00:12:26.908 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.908 "is_configured": true, 00:12:26.908 "data_offset": 2048, 00:12:26.908 "data_size": 63488 00:12:26.908 }, 00:12:26.908 { 00:12:26.908 "name": "pt4", 00:12:26.908 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.908 "is_configured": true, 00:12:26.908 "data_offset": 2048, 00:12:26.908 "data_size": 63488 00:12:26.908 } 00:12:26.908 ] 00:12:26.908 }' 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.908 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.168 [2024-11-25 15:39:25.778992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.168 [2024-11-25 15:39:25.779087] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.168 [2024-11-25 15:39:25.779190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.168 [2024-11-25 15:39:25.779298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.168 [2024-11-25 15:39:25.779349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.168 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.427 [2024-11-25 15:39:25.854853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:27.427 [2024-11-25 15:39:25.854917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:27.427 [2024-11-25 15:39:25.854936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:12:27.427 [2024-11-25 15:39:25.854948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:27.427 [2024-11-25 15:39:25.857161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:27.427 [2024-11-25 15:39:25.857199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:27.427 [2024-11-25 15:39:25.857275] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:27.427 [2024-11-25 15:39:25.857317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:27.427 [2024-11-25 15:39:25.857427] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:12:27.427 [2024-11-25 15:39:25.857443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:27.427 [2024-11-25 15:39:25.857459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:12:27.427 [2024-11-25 15:39:25.857535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:27.427 [2024-11-25 15:39:25.857646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:27.427 pt1
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']'
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:27.427 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:27.427 "name": "raid_bdev1",
00:12:27.427 "uuid": "40fdffeb-a74a-4596-85f9-4f6da9734c96",
00:12:27.427 "strip_size_kb": 0,
00:12:27.427 "state": "configuring",
00:12:27.427 "raid_level": "raid1",
00:12:27.427 "superblock": true,
00:12:27.427 "num_base_bdevs": 4,
00:12:27.427 "num_base_bdevs_discovered": 2,
00:12:27.427 "num_base_bdevs_operational": 3,
00:12:27.427 "base_bdevs_list": [
00:12:27.427 {
00:12:27.427 "name": null,
00:12:27.427 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:27.427 "is_configured": false,
00:12:27.427 "data_offset": 2048,
00:12:27.427 "data_size": 63488
00:12:27.427 },
00:12:27.427 {
00:12:27.428 "name": "pt2",
00:12:27.428 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:27.428 "is_configured": true,
00:12:27.428 "data_offset": 2048,
00:12:27.428 "data_size": 63488
00:12:27.428 },
00:12:27.428 {
00:12:27.428 "name": "pt3",
00:12:27.428 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:27.428 "is_configured": true,
00:12:27.428 "data_offset": 2048,
00:12:27.428 "data_size": 63488
00:12:27.428 },
00:12:27.428 {
00:12:27.428 "name": null,
00:12:27.428 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:27.428 "is_configured": false,
00:12:27.428 "data_offset": 2048,
00:12:27.428 "data_size": 63488
00:12:27.428 }
00:12:27.428 ]
00:12:27.428 }'
00:12:27.428 15:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:27.428 15:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.687 [2024-11-25 15:39:26.330117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:27.687 [2024-11-25 15:39:26.330225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:27.687 [2024-11-25 15:39:26.330267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:12:27.687 [2024-11-25 15:39:26.330294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:27.687 [2024-11-25 15:39:26.330778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:27.687 [2024-11-25 15:39:26.330846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:27.687 [2024-11-25 15:39:26.330967] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:12:27.687 [2024-11-25 15:39:26.331054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:27.687 [2024-11-25 15:39:26.331248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:12:27.687 [2024-11-25 15:39:26.331288] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:27.687 [2024-11-25 15:39:26.331557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:12:27.687 [2024-11-25 15:39:26.331731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:12:27.687 [2024-11-25 15:39:26.331773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:12:27.687 [2024-11-25 15:39:26.331946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:27.687 pt4
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.687 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:27.947 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:27.947 "name": "raid_bdev1",
00:12:27.947 "uuid": "40fdffeb-a74a-4596-85f9-4f6da9734c96",
00:12:27.947 "strip_size_kb": 0,
00:12:27.947 "state": "online",
00:12:27.947 "raid_level": "raid1",
00:12:27.947 "superblock": true,
00:12:27.947 "num_base_bdevs": 4,
00:12:27.947 "num_base_bdevs_discovered": 3,
00:12:27.947 "num_base_bdevs_operational": 3,
00:12:27.947 "base_bdevs_list": [
00:12:27.947 {
00:12:27.947 "name": null,
00:12:27.947 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:27.947 "is_configured": false,
00:12:27.947 "data_offset": 2048,
00:12:27.947 "data_size": 63488
00:12:27.947 },
00:12:27.947 {
00:12:27.947 "name": "pt2",
00:12:27.947 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:27.947 "is_configured": true,
00:12:27.947 "data_offset": 2048,
00:12:27.947 "data_size": 63488
00:12:27.947 },
00:12:27.947 {
00:12:27.947 "name": "pt3",
00:12:27.947 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:27.947 "is_configured": true,
00:12:27.947 "data_offset": 2048,
00:12:27.947 "data_size": 63488
00:12:27.947 },
00:12:27.947 {
00:12:27.947 "name": "pt4",
00:12:27.947 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:27.947 "is_configured": true,
00:12:27.947 "data_offset": 2048,
00:12:27.947 "data_size": 63488
00:12:27.947 }
00:12:27.947 ]
00:12:27.947 }'
00:12:27.947 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:27.947 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.206 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.207 [2024-11-25 15:39:26.825521] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 40fdffeb-a74a-4596-85f9-4f6da9734c96 '!=' 40fdffeb-a74a-4596-85f9-4f6da9734c96 ']'
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74246
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74246 ']'
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74246
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:28.207 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74246
00:12:28.466 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:28.466 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:28.466 killing process with pid 74246
00:12:28.466 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74246'
00:12:28.466 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74246
00:12:28.466 [2024-11-25 15:39:26.892394] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:28.466 [2024-11-25 15:39:26.892510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:28.466 15:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74246
00:12:28.466 [2024-11-25 15:39:26.892588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:28.466 [2024-11-25 15:39:26.892600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:12:28.725 [2024-11-25 15:39:27.281631] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:29.666 15:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:12:29.666
00:12:29.666 real	0m8.065s
00:12:29.666 user	0m12.676s
00:12:29.666 sys	0m1.362s
00:12:29.666 15:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:29.666 ************************************
00:12:29.666 END TEST raid_superblock_test
00:12:29.666 ************************************
00:12:29.666 15:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.926 15:39:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read
00:12:29.926 15:39:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:12:29.926 15:39:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:29.926 15:39:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:29.926 ************************************
00:12:29.926 START TEST raid_read_error_test
00:12:29.926 ************************************
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lmFfju2ipS
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74733
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74733
00:12:29.926 15:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74733 ']'
00:12:29.927 15:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:29.927 15:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:29.927 15:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:29.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:29.927 15:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:29.927 15:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.927 [2024-11-25 15:39:28.492428] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization...
00:12:29.927 [2024-11-25 15:39:28.492648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74733 ]
00:12:30.186 [2024-11-25 15:39:28.653768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:30.186 [2024-11-25 15:39:28.764628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:30.446 [2024-11-25 15:39:28.957073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:30.446 [2024-11-25 15:39:28.957191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:30.714 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:30.714 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0
00:12:30.714 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:30.714 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:12:30.714 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.714 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.714 BaseBdev1_malloc
00:12:30.974 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.974 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:12:30.974 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.974 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.974 true
00:12:30.974 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.974 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:12:30.974 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.974 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.974 [2024-11-25 15:39:29.411456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:12:30.974 [2024-11-25 15:39:29.411514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:30.974 [2024-11-25 15:39:29.411532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:12:30.974 [2024-11-25 15:39:29.411542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:30.974 [2024-11-25 15:39:29.413547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:30.974 [2024-11-25 15:39:29.413587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:12:30.974 BaseBdev1
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.975 BaseBdev2_malloc
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.975 true
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.975 [2024-11-25 15:39:29.478558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:12:30.975 [2024-11-25 15:39:29.478610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:30.975 [2024-11-25 15:39:29.478627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:12:30.975 [2024-11-25 15:39:29.478636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:30.975 [2024-11-25 15:39:29.480621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:30.975 [2024-11-25 15:39:29.480661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:12:30.975 BaseBdev2
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.975 BaseBdev3_malloc
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.975 true
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.975 [2024-11-25 15:39:29.554783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:12:30.975 [2024-11-25 15:39:29.554834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:30.975 [2024-11-25 15:39:29.554849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:12:30.975 [2024-11-25 15:39:29.554859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:30.975 [2024-11-25 15:39:29.556864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:30.975 [2024-11-25 15:39:29.556900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:12:30.975 BaseBdev3
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.975 BaseBdev4_malloc
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.975 true
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.975 [2024-11-25 15:39:29.618525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:12:30.975 [2024-11-25 15:39:29.618574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:30.975 [2024-11-25 15:39:29.618591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:30.975 [2024-11-25 15:39:29.618600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:30.975 [2024-11-25 15:39:29.620578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:30.975 [2024-11-25 15:39:29.620616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:12:30.975 BaseBdev4
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.975 [2024-11-25 15:39:29.630564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:30.975 [2024-11-25 15:39:29.632355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:30.975 [2024-11-25 15:39:29.632433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:30.975 [2024-11-25 15:39:29.632497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:12:30.975 [2024-11-25 15:39:29.632727] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:12:30.975 [2024-11-25 15:39:29.632741] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:30.975 [2024-11-25 15:39:29.632962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0
00:12:30.975 [2024-11-25 15:39:29.633132] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:12:30.975 [2024-11-25 15:39:29.633142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:12:30.975 [2024-11-25 15:39:29.633292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.975 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:31.236 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:31.236 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:31.236 "name": "raid_bdev1",
00:12:31.236 "uuid": "4e7b4160-b9bc-43d5-a687-28c227290820",
00:12:31.236 "strip_size_kb": 0,
00:12:31.236 "state": "online",
00:12:31.236 "raid_level": "raid1",
00:12:31.236 "superblock": true,
00:12:31.236 "num_base_bdevs": 4,
00:12:31.236 "num_base_bdevs_discovered": 4,
00:12:31.236 "num_base_bdevs_operational": 4,
00:12:31.236 "base_bdevs_list": [
00:12:31.236 {
00:12:31.236 "name": "BaseBdev1",
00:12:31.236 "uuid": "97616c47-7446-51b4-b8bb-23ef6b34fc8c",
00:12:31.236 "is_configured": true,
00:12:31.236 "data_offset": 2048,
00:12:31.236 "data_size": 63488
00:12:31.236 },
00:12:31.236 {
00:12:31.236 "name": "BaseBdev2",
00:12:31.236 "uuid": "889eb62f-352c-5f6c-a55e-ab8b11d82aa0",
00:12:31.236 "is_configured": true,
00:12:31.236 "data_offset": 2048,
00:12:31.236 "data_size": 63488
00:12:31.236 },
00:12:31.236 {
00:12:31.236 "name": "BaseBdev3",
00:12:31.236 "uuid": "68f8837e-24c8-5e03-afdc-4560bd14e34a",
00:12:31.236 "is_configured": true,
00:12:31.236 "data_offset": 2048,
00:12:31.236 "data_size": 63488
00:12:31.236 },
00:12:31.236 {
00:12:31.236 "name": "BaseBdev4",
00:12:31.236 "uuid": "2613bb7e-b2ba-54cc-bd90-0fd9abee292d",
00:12:31.236 "is_configured": true,
00:12:31.236 "data_offset": 2048,
00:12:31.236 "data_size": 63488
00:12:31.236 }
00:12:31.236 ]
00:12:31.236 }'
00:12:31.236 15:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:31.236 15:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.496 15:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:12:31.496 15:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:31.496 [2024-11-25 15:39:30.159098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40
00:12:32.437 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:12:32.437 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.437 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]]
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:32.438 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:32.438 15:39:31
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.698 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.698 "name": "raid_bdev1", 00:12:32.698 "uuid": "4e7b4160-b9bc-43d5-a687-28c227290820", 00:12:32.698 "strip_size_kb": 0, 00:12:32.698 "state": "online", 00:12:32.698 "raid_level": "raid1", 00:12:32.698 "superblock": true, 00:12:32.698 "num_base_bdevs": 4, 00:12:32.698 "num_base_bdevs_discovered": 4, 00:12:32.698 "num_base_bdevs_operational": 4, 00:12:32.698 "base_bdevs_list": [ 00:12:32.698 { 00:12:32.698 "name": "BaseBdev1", 00:12:32.698 "uuid": "97616c47-7446-51b4-b8bb-23ef6b34fc8c", 00:12:32.698 "is_configured": true, 00:12:32.698 "data_offset": 2048, 00:12:32.698 "data_size": 63488 00:12:32.698 }, 00:12:32.698 { 00:12:32.698 "name": "BaseBdev2", 00:12:32.698 "uuid": "889eb62f-352c-5f6c-a55e-ab8b11d82aa0", 00:12:32.698 "is_configured": true, 00:12:32.698 "data_offset": 2048, 00:12:32.698 "data_size": 63488 00:12:32.698 }, 00:12:32.698 { 00:12:32.698 "name": "BaseBdev3", 00:12:32.698 "uuid": "68f8837e-24c8-5e03-afdc-4560bd14e34a", 00:12:32.698 "is_configured": true, 00:12:32.698 "data_offset": 2048, 00:12:32.698 "data_size": 63488 00:12:32.698 }, 00:12:32.698 { 00:12:32.698 "name": "BaseBdev4", 00:12:32.698 "uuid": "2613bb7e-b2ba-54cc-bd90-0fd9abee292d", 00:12:32.698 "is_configured": true, 00:12:32.698 "data_offset": 2048, 00:12:32.698 "data_size": 63488 00:12:32.698 } 00:12:32.698 ] 00:12:32.698 }' 00:12:32.698 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.698 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.957 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:32.957 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.957 15:39:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.957 [2024-11-25 15:39:31.534086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.957 [2024-11-25 15:39:31.534194] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.957 [2024-11-25 15:39:31.536788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.957 [2024-11-25 15:39:31.536886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.957 [2024-11-25 15:39:31.537030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.957 [2024-11-25 15:39:31.537080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:32.957 { 00:12:32.957 "results": [ 00:12:32.957 { 00:12:32.957 "job": "raid_bdev1", 00:12:32.957 "core_mask": "0x1", 00:12:32.957 "workload": "randrw", 00:12:32.957 "percentage": 50, 00:12:32.957 "status": "finished", 00:12:32.957 "queue_depth": 1, 00:12:32.957 "io_size": 131072, 00:12:32.957 "runtime": 1.375822, 00:12:32.957 "iops": 10985.432708591663, 00:12:32.957 "mibps": 1373.1790885739579, 00:12:32.957 "io_failed": 0, 00:12:32.957 "io_timeout": 0, 00:12:32.957 "avg_latency_us": 88.53785073326272, 00:12:32.957 "min_latency_us": 21.910917030567685, 00:12:32.957 "max_latency_us": 1616.9362445414847 00:12:32.957 } 00:12:32.957 ], 00:12:32.957 "core_count": 1 00:12:32.957 } 00:12:32.957 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.957 15:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74733 00:12:32.957 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74733 ']' 00:12:32.957 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74733 00:12:32.958 15:39:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:32.958 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.958 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74733 00:12:32.958 killing process with pid 74733 00:12:32.958 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:32.958 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:32.958 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74733' 00:12:32.958 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74733 00:12:32.958 [2024-11-25 15:39:31.570599] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.958 15:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74733 00:12:33.217 [2024-11-25 15:39:31.877644] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.601 15:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lmFfju2ipS 00:12:34.601 15:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:34.601 15:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:34.601 15:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:34.601 15:39:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:34.601 ************************************ 00:12:34.601 END TEST raid_read_error_test 00:12:34.601 ************************************ 00:12:34.601 15:39:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:34.601 15:39:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:34.601 15:39:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:34.601 00:12:34.601 real 0m4.619s 00:12:34.601 user 0m5.474s 00:12:34.601 sys 0m0.564s 00:12:34.601 15:39:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.601 15:39:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.601 15:39:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:34.601 15:39:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:34.601 15:39:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.601 15:39:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:34.601 ************************************ 00:12:34.601 START TEST raid_write_error_test 00:12:34.601 ************************************ 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HXvnjooyfg 00:12:34.601 15:39:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74873 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74873 00:12:34.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74873 ']' 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.601 15:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.601 [2024-11-25 15:39:33.170400] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:12:34.601 [2024-11-25 15:39:33.170527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74873 ] 00:12:34.861 [2024-11-25 15:39:33.340958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.861 [2024-11-25 15:39:33.452249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.120 [2024-11-25 15:39:33.638535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.120 [2024-11-25 15:39:33.638621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.380 15:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.380 15:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:35.380 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.380 15:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:35.380 15:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.380 15:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.380 BaseBdev1_malloc 00:12:35.380 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.380 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:35.380 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.380 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.380 true 00:12:35.380 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:35.380 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:35.380 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.380 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.380 [2024-11-25 15:39:34.039245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:35.380 [2024-11-25 15:39:34.039300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.380 [2024-11-25 15:39:34.039319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:35.380 [2024-11-25 15:39:34.039330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.380 [2024-11-25 15:39:34.041312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.380 [2024-11-25 15:39:34.041350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:35.380 BaseBdev1 00:12:35.380 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.380 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.380 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:35.380 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.380 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.641 BaseBdev2_malloc 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:35.641 15:39:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.641 true 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.641 [2024-11-25 15:39:34.103718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:35.641 [2024-11-25 15:39:34.103771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.641 [2024-11-25 15:39:34.103786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:35.641 [2024-11-25 15:39:34.103796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.641 [2024-11-25 15:39:34.105758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.641 [2024-11-25 15:39:34.105794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:35.641 BaseBdev2 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:35.641 BaseBdev3_malloc 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.641 true 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.641 [2024-11-25 15:39:34.181471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:35.641 [2024-11-25 15:39:34.181525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.641 [2024-11-25 15:39:34.181542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:35.641 [2024-11-25 15:39:34.181552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.641 [2024-11-25 15:39:34.183592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.641 [2024-11-25 15:39:34.183632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:35.641 BaseBdev3 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.641 BaseBdev4_malloc 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.641 true 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.641 [2024-11-25 15:39:34.247593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:35.641 [2024-11-25 15:39:34.247706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.641 [2024-11-25 15:39:34.247727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:35.641 [2024-11-25 15:39:34.247739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.641 [2024-11-25 15:39:34.249741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.641 [2024-11-25 15:39:34.249783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:35.641 BaseBdev4 
00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.641 [2024-11-25 15:39:34.259631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.641 [2024-11-25 15:39:34.261419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.641 [2024-11-25 15:39:34.261498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.641 [2024-11-25 15:39:34.261561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:35.641 [2024-11-25 15:39:34.261786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:35.641 [2024-11-25 15:39:34.261801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:35.641 [2024-11-25 15:39:34.262049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:35.641 [2024-11-25 15:39:34.262273] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:35.641 [2024-11-25 15:39:34.262286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:35.641 [2024-11-25 15:39:34.262430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:35.641 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.642 "name": "raid_bdev1", 00:12:35.642 "uuid": "47607468-35a2-4bca-99c2-b5bdd144c936", 00:12:35.642 "strip_size_kb": 0, 00:12:35.642 "state": "online", 00:12:35.642 "raid_level": "raid1", 00:12:35.642 "superblock": true, 00:12:35.642 "num_base_bdevs": 4, 00:12:35.642 "num_base_bdevs_discovered": 4, 00:12:35.642 
"num_base_bdevs_operational": 4, 00:12:35.642 "base_bdevs_list": [ 00:12:35.642 { 00:12:35.642 "name": "BaseBdev1", 00:12:35.642 "uuid": "8016f339-3a12-5a4e-bb96-0c26faacc7f3", 00:12:35.642 "is_configured": true, 00:12:35.642 "data_offset": 2048, 00:12:35.642 "data_size": 63488 00:12:35.642 }, 00:12:35.642 { 00:12:35.642 "name": "BaseBdev2", 00:12:35.642 "uuid": "a2ce230f-d6ff-5b71-87a4-7a1605d68896", 00:12:35.642 "is_configured": true, 00:12:35.642 "data_offset": 2048, 00:12:35.642 "data_size": 63488 00:12:35.642 }, 00:12:35.642 { 00:12:35.642 "name": "BaseBdev3", 00:12:35.642 "uuid": "421fd474-5df1-5a5d-b69b-73525ae4515f", 00:12:35.642 "is_configured": true, 00:12:35.642 "data_offset": 2048, 00:12:35.642 "data_size": 63488 00:12:35.642 }, 00:12:35.642 { 00:12:35.642 "name": "BaseBdev4", 00:12:35.642 "uuid": "5e4e4224-209e-5c88-adb0-c75091b2b920", 00:12:35.642 "is_configured": true, 00:12:35.642 "data_offset": 2048, 00:12:35.642 "data_size": 63488 00:12:35.642 } 00:12:35.642 ] 00:12:35.642 }' 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.642 15:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.212 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:36.212 15:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:36.212 [2024-11-25 15:39:34.823843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.153 [2024-11-25 15:39:35.694117] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:37.153 [2024-11-25 15:39:35.694172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:37.153 [2024-11-25 15:39:35.694420] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.153 "name": "raid_bdev1", 00:12:37.153 "uuid": "47607468-35a2-4bca-99c2-b5bdd144c936", 00:12:37.153 "strip_size_kb": 0, 00:12:37.153 "state": "online", 00:12:37.153 "raid_level": "raid1", 00:12:37.153 "superblock": true, 00:12:37.153 "num_base_bdevs": 4, 00:12:37.153 "num_base_bdevs_discovered": 3, 00:12:37.153 "num_base_bdevs_operational": 3, 00:12:37.153 "base_bdevs_list": [ 00:12:37.153 { 00:12:37.153 "name": null, 00:12:37.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.153 "is_configured": false, 00:12:37.153 "data_offset": 0, 00:12:37.153 "data_size": 63488 00:12:37.153 }, 00:12:37.153 { 00:12:37.153 "name": "BaseBdev2", 00:12:37.153 "uuid": "a2ce230f-d6ff-5b71-87a4-7a1605d68896", 00:12:37.153 "is_configured": true, 00:12:37.153 "data_offset": 2048, 00:12:37.153 "data_size": 63488 00:12:37.153 }, 00:12:37.153 { 00:12:37.153 "name": "BaseBdev3", 00:12:37.153 "uuid": "421fd474-5df1-5a5d-b69b-73525ae4515f", 00:12:37.153 "is_configured": true, 00:12:37.153 "data_offset": 2048, 00:12:37.153 "data_size": 63488 00:12:37.153 }, 00:12:37.153 { 00:12:37.153 "name": "BaseBdev4", 00:12:37.153 "uuid": "5e4e4224-209e-5c88-adb0-c75091b2b920", 00:12:37.153 "is_configured": true, 00:12:37.153 "data_offset": 2048, 00:12:37.153 "data_size": 63488 00:12:37.153 } 00:12:37.153 ] 
00:12:37.153 }' 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.153 15:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.724 15:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:37.724 15:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.724 15:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.724 [2024-11-25 15:39:36.161935] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:37.724 [2024-11-25 15:39:36.162049] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.724 [2024-11-25 15:39:36.164641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.724 [2024-11-25 15:39:36.164728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.724 [2024-11-25 15:39:36.164865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.724 [2024-11-25 15:39:36.164917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:37.724 { 00:12:37.724 "results": [ 00:12:37.724 { 00:12:37.724 "job": "raid_bdev1", 00:12:37.724 "core_mask": "0x1", 00:12:37.724 "workload": "randrw", 00:12:37.724 "percentage": 50, 00:12:37.724 "status": "finished", 00:12:37.724 "queue_depth": 1, 00:12:37.724 "io_size": 131072, 00:12:37.724 "runtime": 1.338989, 00:12:37.724 "iops": 11916.453383859016, 00:12:37.725 "mibps": 1489.556672982377, 00:12:37.725 "io_failed": 0, 00:12:37.725 "io_timeout": 0, 00:12:37.725 "avg_latency_us": 81.36410412477107, 00:12:37.725 "min_latency_us": 22.022707423580787, 00:12:37.725 "max_latency_us": 1466.6899563318777 00:12:37.725 } 00:12:37.725 ], 00:12:37.725 "core_count": 1 
00:12:37.725 } 00:12:37.725 15:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.725 15:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74873 00:12:37.725 15:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74873 ']' 00:12:37.725 15:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74873 00:12:37.725 15:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:37.725 15:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.725 15:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74873 00:12:37.725 15:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.725 15:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.725 killing process with pid 74873 00:12:37.725 15:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74873' 00:12:37.725 15:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74873 00:12:37.725 [2024-11-25 15:39:36.211286] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:37.725 15:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74873 00:12:37.985 [2024-11-25 15:39:36.524633] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:39.366 15:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:39.366 15:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HXvnjooyfg 00:12:39.366 15:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:39.366 15:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:39.366 15:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:39.366 ************************************ 00:12:39.366 END TEST raid_write_error_test 00:12:39.366 ************************************ 00:12:39.366 15:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:39.366 15:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:39.366 15:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:39.366 00:12:39.366 real 0m4.573s 00:12:39.366 user 0m5.422s 00:12:39.366 sys 0m0.587s 00:12:39.366 15:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.366 15:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.366 15:39:37 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:39.366 15:39:37 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:39.366 15:39:37 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:39.366 15:39:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:39.366 15:39:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.366 15:39:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:39.366 ************************************ 00:12:39.366 START TEST raid_rebuild_test 00:12:39.366 ************************************ 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:39.366 
15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:39.366 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75017 00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75017 00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75017 ']' 00:12:39.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.367 15:39:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.367 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:39.367 Zero copy mechanism will not be used. 00:12:39.367 [2024-11-25 15:39:37.812666] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:12:39.367 [2024-11-25 15:39:37.812854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75017 ] 00:12:39.367 [2024-11-25 15:39:37.969303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.626 [2024-11-25 15:39:38.078918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.626 [2024-11-25 15:39:38.271016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.626 [2024-11-25 15:39:38.271156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.195 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.195 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:40.195 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.195 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:40.195 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.195 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.195 BaseBdev1_malloc 00:12:40.195 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.195 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:40.195 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.195 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.196 [2024-11-25 15:39:38.669673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:40.196 
[2024-11-25 15:39:38.669781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.196 [2024-11-25 15:39:38.669825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:40.196 [2024-11-25 15:39:38.669856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.196 [2024-11-25 15:39:38.671972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.196 [2024-11-25 15:39:38.672055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:40.196 BaseBdev1 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.196 BaseBdev2_malloc 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.196 [2024-11-25 15:39:38.723166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:40.196 [2024-11-25 15:39:38.723269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.196 [2024-11-25 15:39:38.723292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:40.196 [2024-11-25 15:39:38.723302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.196 [2024-11-25 15:39:38.725369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.196 [2024-11-25 15:39:38.725420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:40.196 BaseBdev2 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.196 spare_malloc 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.196 spare_delay 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.196 [2024-11-25 15:39:38.800456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:40.196 [2024-11-25 15:39:38.800516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:40.196 [2024-11-25 15:39:38.800551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:40.196 [2024-11-25 15:39:38.800561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.196 [2024-11-25 15:39:38.802535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.196 [2024-11-25 15:39:38.802574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:40.196 spare 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.196 [2024-11-25 15:39:38.812487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.196 [2024-11-25 15:39:38.814131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:40.196 [2024-11-25 15:39:38.814207] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:40.196 [2024-11-25 15:39:38.814219] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:40.196 [2024-11-25 15:39:38.814441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:40.196 [2024-11-25 15:39:38.814592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:40.196 [2024-11-25 15:39:38.814602] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:40.196 [2024-11-25 15:39:38.814733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.196 "name": "raid_bdev1", 00:12:40.196 "uuid": "cdb79f7f-17b5-425c-ad2b-f9ca3dacaa41", 00:12:40.196 "strip_size_kb": 0, 00:12:40.196 "state": "online", 00:12:40.196 
"raid_level": "raid1", 00:12:40.196 "superblock": false, 00:12:40.196 "num_base_bdevs": 2, 00:12:40.196 "num_base_bdevs_discovered": 2, 00:12:40.196 "num_base_bdevs_operational": 2, 00:12:40.196 "base_bdevs_list": [ 00:12:40.196 { 00:12:40.196 "name": "BaseBdev1", 00:12:40.196 "uuid": "58620041-e784-56ad-bef3-79f6c465938c", 00:12:40.196 "is_configured": true, 00:12:40.196 "data_offset": 0, 00:12:40.196 "data_size": 65536 00:12:40.196 }, 00:12:40.196 { 00:12:40.196 "name": "BaseBdev2", 00:12:40.196 "uuid": "f4b2a2a4-33e3-5e2b-817c-44d484309040", 00:12:40.196 "is_configured": true, 00:12:40.196 "data_offset": 0, 00:12:40.196 "data_size": 65536 00:12:40.196 } 00:12:40.196 ] 00:12:40.196 }' 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.196 15:39:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.765 15:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:40.765 15:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:40.765 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.766 [2024-11-25 15:39:39.244054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.766 15:39:39 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:40.766 15:39:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:41.025 [2024-11-25 15:39:39.519318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:41.025 /dev/nbd0 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.025 1+0 records in 00:12:41.025 1+0 records out 00:12:41.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254344 s, 16.1 MB/s 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:41.025 15:39:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:45.255 65536+0 records in 00:12:45.255 65536+0 records out 00:12:45.255 33554432 bytes (34 MB, 32 MiB) copied, 3.67884 s, 9.1 MB/s 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:45.255 [2024-11-25 15:39:43.442025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.255 15:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.256 [2024-11-25 15:39:43.474468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.256 15:39:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.256 "name": "raid_bdev1", 00:12:45.256 "uuid": "cdb79f7f-17b5-425c-ad2b-f9ca3dacaa41", 00:12:45.256 "strip_size_kb": 0, 00:12:45.256 "state": "online", 00:12:45.256 "raid_level": "raid1", 00:12:45.256 "superblock": false, 00:12:45.256 "num_base_bdevs": 2, 00:12:45.256 "num_base_bdevs_discovered": 1, 00:12:45.256 "num_base_bdevs_operational": 1, 00:12:45.256 "base_bdevs_list": [ 00:12:45.256 { 00:12:45.256 "name": null, 00:12:45.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.256 "is_configured": false, 00:12:45.256 "data_offset": 0, 00:12:45.256 "data_size": 65536 00:12:45.256 }, 00:12:45.256 { 00:12:45.256 "name": "BaseBdev2", 00:12:45.256 "uuid": "f4b2a2a4-33e3-5e2b-817c-44d484309040", 00:12:45.256 "is_configured": true, 00:12:45.256 "data_offset": 0, 00:12:45.256 "data_size": 65536 00:12:45.256 } 00:12:45.256 ] 00:12:45.256 }' 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.256 15:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.256 [2024-11-25 15:39:43.925684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:45.516 [2024-11-25 15:39:43.942670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:12:45.516 15:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.516 15:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:45.516 [2024-11-25 15:39:43.944512] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:46.458 15:39:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.458 15:39:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.458 15:39:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.458 15:39:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.458 15:39:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.458 15:39:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.458 15:39:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.458 15:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.458 15:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.458 15:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.458 15:39:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.458 "name": "raid_bdev1", 00:12:46.458 "uuid": "cdb79f7f-17b5-425c-ad2b-f9ca3dacaa41", 00:12:46.458 "strip_size_kb": 0, 00:12:46.458 "state": "online", 00:12:46.458 "raid_level": "raid1", 00:12:46.458 "superblock": false, 00:12:46.458 "num_base_bdevs": 2, 00:12:46.458 "num_base_bdevs_discovered": 2, 00:12:46.458 "num_base_bdevs_operational": 2, 00:12:46.458 "process": { 00:12:46.458 "type": "rebuild", 00:12:46.458 "target": "spare", 00:12:46.458 "progress": { 00:12:46.458 
"blocks": 20480, 00:12:46.458 "percent": 31 00:12:46.458 } 00:12:46.458 }, 00:12:46.458 "base_bdevs_list": [ 00:12:46.458 { 00:12:46.458 "name": "spare", 00:12:46.458 "uuid": "4417c360-26b1-5157-bc1f-6b9b85f74063", 00:12:46.458 "is_configured": true, 00:12:46.458 "data_offset": 0, 00:12:46.458 "data_size": 65536 00:12:46.458 }, 00:12:46.458 { 00:12:46.458 "name": "BaseBdev2", 00:12:46.458 "uuid": "f4b2a2a4-33e3-5e2b-817c-44d484309040", 00:12:46.458 "is_configured": true, 00:12:46.458 "data_offset": 0, 00:12:46.458 "data_size": 65536 00:12:46.458 } 00:12:46.458 ] 00:12:46.458 }' 00:12:46.458 15:39:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.458 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.458 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.458 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.458 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:46.458 15:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.458 15:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.458 [2024-11-25 15:39:45.083929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.718 [2024-11-25 15:39:45.149447] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:46.718 [2024-11-25 15:39:45.149504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.718 [2024-11-25 15:39:45.149519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.718 [2024-11-25 15:39:45.149528] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:46.718 15:39:45 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.718 "name": "raid_bdev1", 00:12:46.718 "uuid": "cdb79f7f-17b5-425c-ad2b-f9ca3dacaa41", 00:12:46.718 "strip_size_kb": 0, 00:12:46.718 "state": "online", 00:12:46.718 "raid_level": "raid1", 00:12:46.718 
"superblock": false, 00:12:46.718 "num_base_bdevs": 2, 00:12:46.718 "num_base_bdevs_discovered": 1, 00:12:46.718 "num_base_bdevs_operational": 1, 00:12:46.718 "base_bdevs_list": [ 00:12:46.718 { 00:12:46.718 "name": null, 00:12:46.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.718 "is_configured": false, 00:12:46.718 "data_offset": 0, 00:12:46.718 "data_size": 65536 00:12:46.718 }, 00:12:46.718 { 00:12:46.718 "name": "BaseBdev2", 00:12:46.718 "uuid": "f4b2a2a4-33e3-5e2b-817c-44d484309040", 00:12:46.718 "is_configured": true, 00:12:46.718 "data_offset": 0, 00:12:46.718 "data_size": 65536 00:12:46.718 } 00:12:46.718 ] 00:12:46.718 }' 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.718 15:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.979 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:46.979 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.979 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:46.979 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:46.979 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.979 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.979 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.979 15:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.979 15:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.979 15:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.979 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:46.979 "name": "raid_bdev1", 00:12:46.979 "uuid": "cdb79f7f-17b5-425c-ad2b-f9ca3dacaa41", 00:12:46.979 "strip_size_kb": 0, 00:12:46.979 "state": "online", 00:12:46.979 "raid_level": "raid1", 00:12:46.979 "superblock": false, 00:12:46.979 "num_base_bdevs": 2, 00:12:46.979 "num_base_bdevs_discovered": 1, 00:12:46.979 "num_base_bdevs_operational": 1, 00:12:46.979 "base_bdevs_list": [ 00:12:46.979 { 00:12:46.979 "name": null, 00:12:46.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.979 "is_configured": false, 00:12:46.979 "data_offset": 0, 00:12:46.979 "data_size": 65536 00:12:46.979 }, 00:12:46.979 { 00:12:46.979 "name": "BaseBdev2", 00:12:46.979 "uuid": "f4b2a2a4-33e3-5e2b-817c-44d484309040", 00:12:46.979 "is_configured": true, 00:12:46.979 "data_offset": 0, 00:12:46.979 "data_size": 65536 00:12:46.979 } 00:12:46.979 ] 00:12:46.979 }' 00:12:46.979 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.240 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:47.240 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.240 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:47.240 15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:47.240 15:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.240 15:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.240 [2024-11-25 15:39:45.758164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:47.240 [2024-11-25 15:39:45.773562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:47.240 15:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.240 
15:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:47.240 [2024-11-25 15:39:45.775392] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:48.180 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.180 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.180 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.180 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.180 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.180 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.180 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.180 15:39:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.180 15:39:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.180 15:39:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.180 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.180 "name": "raid_bdev1", 00:12:48.180 "uuid": "cdb79f7f-17b5-425c-ad2b-f9ca3dacaa41", 00:12:48.180 "strip_size_kb": 0, 00:12:48.180 "state": "online", 00:12:48.180 "raid_level": "raid1", 00:12:48.180 "superblock": false, 00:12:48.180 "num_base_bdevs": 2, 00:12:48.180 "num_base_bdevs_discovered": 2, 00:12:48.180 "num_base_bdevs_operational": 2, 00:12:48.180 "process": { 00:12:48.180 "type": "rebuild", 00:12:48.180 "target": "spare", 00:12:48.180 "progress": { 00:12:48.180 "blocks": 20480, 00:12:48.180 "percent": 31 00:12:48.180 } 00:12:48.180 }, 00:12:48.180 "base_bdevs_list": [ 
00:12:48.180 { 00:12:48.180 "name": "spare", 00:12:48.180 "uuid": "4417c360-26b1-5157-bc1f-6b9b85f74063", 00:12:48.180 "is_configured": true, 00:12:48.180 "data_offset": 0, 00:12:48.180 "data_size": 65536 00:12:48.180 }, 00:12:48.180 { 00:12:48.180 "name": "BaseBdev2", 00:12:48.180 "uuid": "f4b2a2a4-33e3-5e2b-817c-44d484309040", 00:12:48.180 "is_configured": true, 00:12:48.180 "data_offset": 0, 00:12:48.180 "data_size": 65536 00:12:48.180 } 00:12:48.180 ] 00:12:48.180 }' 00:12:48.180 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.180 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.180 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=359 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.440 
15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.440 "name": "raid_bdev1", 00:12:48.440 "uuid": "cdb79f7f-17b5-425c-ad2b-f9ca3dacaa41", 00:12:48.440 "strip_size_kb": 0, 00:12:48.440 "state": "online", 00:12:48.440 "raid_level": "raid1", 00:12:48.440 "superblock": false, 00:12:48.440 "num_base_bdevs": 2, 00:12:48.440 "num_base_bdevs_discovered": 2, 00:12:48.440 "num_base_bdevs_operational": 2, 00:12:48.440 "process": { 00:12:48.440 "type": "rebuild", 00:12:48.440 "target": "spare", 00:12:48.440 "progress": { 00:12:48.440 "blocks": 22528, 00:12:48.440 "percent": 34 00:12:48.440 } 00:12:48.440 }, 00:12:48.440 "base_bdevs_list": [ 00:12:48.440 { 00:12:48.440 "name": "spare", 00:12:48.440 "uuid": "4417c360-26b1-5157-bc1f-6b9b85f74063", 00:12:48.440 "is_configured": true, 00:12:48.440 "data_offset": 0, 00:12:48.440 "data_size": 65536 00:12:48.440 }, 00:12:48.440 { 00:12:48.440 "name": "BaseBdev2", 00:12:48.440 "uuid": "f4b2a2a4-33e3-5e2b-817c-44d484309040", 00:12:48.440 "is_configured": true, 00:12:48.440 "data_offset": 0, 00:12:48.440 "data_size": 65536 00:12:48.440 } 00:12:48.440 ] 00:12:48.440 }' 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:48.440 15:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.440 15:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.440 15:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.378 15:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.378 15:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.378 15:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.378 15:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.378 15:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.378 15:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.379 15:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.379 15:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.379 15:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.379 15:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.379 15:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.638 15:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.638 "name": "raid_bdev1", 00:12:49.638 "uuid": "cdb79f7f-17b5-425c-ad2b-f9ca3dacaa41", 00:12:49.638 "strip_size_kb": 0, 00:12:49.638 "state": "online", 00:12:49.638 "raid_level": "raid1", 00:12:49.638 "superblock": false, 00:12:49.638 "num_base_bdevs": 2, 00:12:49.638 "num_base_bdevs_discovered": 2, 00:12:49.638 "num_base_bdevs_operational": 2, 00:12:49.638 "process": { 
00:12:49.638 "type": "rebuild", 00:12:49.638 "target": "spare", 00:12:49.638 "progress": { 00:12:49.638 "blocks": 45056, 00:12:49.638 "percent": 68 00:12:49.638 } 00:12:49.638 }, 00:12:49.638 "base_bdevs_list": [ 00:12:49.638 { 00:12:49.638 "name": "spare", 00:12:49.638 "uuid": "4417c360-26b1-5157-bc1f-6b9b85f74063", 00:12:49.638 "is_configured": true, 00:12:49.638 "data_offset": 0, 00:12:49.638 "data_size": 65536 00:12:49.638 }, 00:12:49.638 { 00:12:49.638 "name": "BaseBdev2", 00:12:49.638 "uuid": "f4b2a2a4-33e3-5e2b-817c-44d484309040", 00:12:49.638 "is_configured": true, 00:12:49.638 "data_offset": 0, 00:12:49.638 "data_size": 65536 00:12:49.638 } 00:12:49.638 ] 00:12:49.638 }' 00:12:49.638 15:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.638 15:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.638 15:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.638 15:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.638 15:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:50.576 [2024-11-25 15:39:48.988121] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:50.576 [2024-11-25 15:39:48.988261] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:50.576 [2024-11-25 15:39:48.988334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.576 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.576 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.576 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.576 15:39:49 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.576 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.576 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.576 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.576 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.576 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.576 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.576 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.576 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.576 "name": "raid_bdev1", 00:12:50.576 "uuid": "cdb79f7f-17b5-425c-ad2b-f9ca3dacaa41", 00:12:50.576 "strip_size_kb": 0, 00:12:50.576 "state": "online", 00:12:50.576 "raid_level": "raid1", 00:12:50.576 "superblock": false, 00:12:50.576 "num_base_bdevs": 2, 00:12:50.576 "num_base_bdevs_discovered": 2, 00:12:50.576 "num_base_bdevs_operational": 2, 00:12:50.576 "base_bdevs_list": [ 00:12:50.576 { 00:12:50.576 "name": "spare", 00:12:50.576 "uuid": "4417c360-26b1-5157-bc1f-6b9b85f74063", 00:12:50.576 "is_configured": true, 00:12:50.576 "data_offset": 0, 00:12:50.576 "data_size": 65536 00:12:50.576 }, 00:12:50.576 { 00:12:50.576 "name": "BaseBdev2", 00:12:50.576 "uuid": "f4b2a2a4-33e3-5e2b-817c-44d484309040", 00:12:50.576 "is_configured": true, 00:12:50.576 "data_offset": 0, 00:12:50.576 "data_size": 65536 00:12:50.576 } 00:12:50.576 ] 00:12:50.576 }' 00:12:50.576 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:50.835 15:39:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.835 "name": "raid_bdev1", 00:12:50.835 "uuid": "cdb79f7f-17b5-425c-ad2b-f9ca3dacaa41", 00:12:50.835 "strip_size_kb": 0, 00:12:50.835 "state": "online", 00:12:50.835 "raid_level": "raid1", 00:12:50.835 "superblock": false, 00:12:50.835 "num_base_bdevs": 2, 00:12:50.835 "num_base_bdevs_discovered": 2, 00:12:50.835 "num_base_bdevs_operational": 2, 00:12:50.835 "base_bdevs_list": [ 00:12:50.835 { 00:12:50.835 "name": "spare", 00:12:50.835 "uuid": "4417c360-26b1-5157-bc1f-6b9b85f74063", 00:12:50.835 "is_configured": true, 
00:12:50.835 "data_offset": 0, 00:12:50.835 "data_size": 65536 00:12:50.835 }, 00:12:50.835 { 00:12:50.835 "name": "BaseBdev2", 00:12:50.835 "uuid": "f4b2a2a4-33e3-5e2b-817c-44d484309040", 00:12:50.835 "is_configured": true, 00:12:50.835 "data_offset": 0, 00:12:50.835 "data_size": 65536 00:12:50.835 } 00:12:50.835 ] 00:12:50.835 }' 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.835 "name": "raid_bdev1", 00:12:50.835 "uuid": "cdb79f7f-17b5-425c-ad2b-f9ca3dacaa41", 00:12:50.835 "strip_size_kb": 0, 00:12:50.835 "state": "online", 00:12:50.835 "raid_level": "raid1", 00:12:50.835 "superblock": false, 00:12:50.835 "num_base_bdevs": 2, 00:12:50.835 "num_base_bdevs_discovered": 2, 00:12:50.835 "num_base_bdevs_operational": 2, 00:12:50.835 "base_bdevs_list": [ 00:12:50.835 { 00:12:50.835 "name": "spare", 00:12:50.835 "uuid": "4417c360-26b1-5157-bc1f-6b9b85f74063", 00:12:50.835 "is_configured": true, 00:12:50.835 "data_offset": 0, 00:12:50.835 "data_size": 65536 00:12:50.835 }, 00:12:50.835 { 00:12:50.835 "name": "BaseBdev2", 00:12:50.835 "uuid": "f4b2a2a4-33e3-5e2b-817c-44d484309040", 00:12:50.835 "is_configured": true, 00:12:50.835 "data_offset": 0, 00:12:50.835 "data_size": 65536 00:12:50.835 } 00:12:50.835 ] 00:12:50.835 }' 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.835 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.404 [2024-11-25 15:39:49.857762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:51.404 [2024-11-25 15:39:49.857845] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.404 [2024-11-25 15:39:49.858032] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.404 [2024-11-25 15:39:49.858179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.404 [2024-11-25 15:39:49.858231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:51.404 15:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:51.664 /dev/nbd0 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.664 1+0 records in 00:12:51.664 1+0 records out 00:12:51.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256912 s, 15.9 MB/s 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:51.664 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:51.924 /dev/nbd1 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.924 1+0 records in 00:12:51.924 1+0 records out 00:12:51.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442811 s, 9.2 MB/s 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.924 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:52.184 15:39:50 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:52.184 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:52.184 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:52.184 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.184 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.184 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:52.184 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:52.184 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.184 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.184 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:52.444 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:52.444 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:52.444 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:52.444 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.444 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.444 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:52.444 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:52.444 15:39:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.444 15:39:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:52.444 15:39:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75017 00:12:52.444 15:39:50 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75017 ']' 00:12:52.444 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75017 00:12:52.444 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:52.444 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.444 15:39:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75017 00:12:52.444 15:39:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.444 15:39:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.444 15:39:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75017' 00:12:52.444 killing process with pid 75017 00:12:52.444 Received shutdown signal, test time was about 60.000000 seconds 00:12:52.444 00:12:52.444 Latency(us) 00:12:52.444 [2024-11-25T15:39:51.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.444 [2024-11-25T15:39:51.125Z] =================================================================================================================== 00:12:52.444 [2024-11-25T15:39:51.125Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:52.444 15:39:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75017 00:12:52.444 [2024-11-25 15:39:51.031019] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:52.444 15:39:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75017 00:12:52.706 [2024-11-25 15:39:51.320181] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:54.088 00:12:54.088 real 0m14.630s 00:12:54.088 user 0m16.765s 00:12:54.088 sys 0m2.776s 00:12:54.088 15:39:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.088 ************************************ 00:12:54.088 END TEST raid_rebuild_test 00:12:54.088 ************************************ 00:12:54.088 15:39:52 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:54.088 15:39:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:54.088 15:39:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.088 15:39:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:54.088 ************************************ 00:12:54.088 START TEST raid_rebuild_test_sb 00:12:54.088 ************************************ 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75430 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75430 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75430 ']' 00:12:54.088 15:39:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.088 15:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.088 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:54.088 Zero copy mechanism will not be used. 00:12:54.088 [2024-11-25 15:39:52.517077] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:12:54.088 [2024-11-25 15:39:52.517187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75430 ] 00:12:54.088 [2024-11-25 15:39:52.687747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.347 [2024-11-25 15:39:52.796249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.347 [2024-11-25 15:39:52.989118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.347 [2024-11-25 15:39:52.989171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.918 BaseBdev1_malloc 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.918 [2024-11-25 15:39:53.365827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:54.918 [2024-11-25 15:39:53.365898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.918 [2024-11-25 15:39:53.365920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:54.918 [2024-11-25 15:39:53.365932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.918 [2024-11-25 15:39:53.368012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.918 [2024-11-25 15:39:53.368054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:54.918 BaseBdev1 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:54.918 15:39:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.918 BaseBdev2_malloc 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.918 [2024-11-25 15:39:53.418078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:54.918 [2024-11-25 15:39:53.418133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.918 [2024-11-25 15:39:53.418151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:54.918 [2024-11-25 15:39:53.418163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.918 [2024-11-25 15:39:53.420126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.918 [2024-11-25 15:39:53.420164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:54.918 BaseBdev2 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.918 spare_malloc 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.918 spare_delay 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.918 [2024-11-25 15:39:53.515179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:54.918 [2024-11-25 15:39:53.515236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.918 [2024-11-25 15:39:53.515255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:54.918 [2024-11-25 15:39:53.515265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.918 [2024-11-25 15:39:53.517235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.918 [2024-11-25 15:39:53.517275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:54.918 spare 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.918 15:39:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.918 [2024-11-25 15:39:53.527223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.918 [2024-11-25 15:39:53.528897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.918 [2024-11-25 15:39:53.529064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:54.918 [2024-11-25 15:39:53.529081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:54.918 [2024-11-25 15:39:53.529302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:54.918 [2024-11-25 15:39:53.529467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:54.918 [2024-11-25 15:39:53.529477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:54.918 [2024-11-25 15:39:53.529618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.918 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.919 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:12:54.919 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.919 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.919 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.919 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.919 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.919 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.919 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.919 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.919 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.919 "name": "raid_bdev1", 00:12:54.919 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:12:54.919 "strip_size_kb": 0, 00:12:54.919 "state": "online", 00:12:54.919 "raid_level": "raid1", 00:12:54.919 "superblock": true, 00:12:54.919 "num_base_bdevs": 2, 00:12:54.919 "num_base_bdevs_discovered": 2, 00:12:54.919 "num_base_bdevs_operational": 2, 00:12:54.919 "base_bdevs_list": [ 00:12:54.919 { 00:12:54.919 "name": "BaseBdev1", 00:12:54.919 "uuid": "a13cdbd4-6af8-56d0-b6e6-c00ea955c41b", 00:12:54.919 "is_configured": true, 00:12:54.919 "data_offset": 2048, 00:12:54.919 "data_size": 63488 00:12:54.919 }, 00:12:54.919 { 00:12:54.919 "name": "BaseBdev2", 00:12:54.919 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:12:54.919 "is_configured": true, 00:12:54.919 "data_offset": 2048, 00:12:54.919 "data_size": 63488 00:12:54.919 } 00:12:54.919 ] 00:12:54.919 }' 00:12:54.919 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.919 15:39:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.487 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.487 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.487 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.487 15:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:55.487 [2024-11-25 15:39:53.974720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.487 15:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:55.487 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:55.746 [2024-11-25 15:39:54.254074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:55.746 /dev/nbd0 00:12:55.746 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:55.746 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:55.746 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:55.746 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:55.746 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:55.746 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:55.746 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:55.746 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:55.746 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:12:55.746 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:55.746 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.746 1+0 records in 00:12:55.746 1+0 records out 00:12:55.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294143 s, 13.9 MB/s 00:12:55.746 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.746 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:55.747 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.747 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:55.747 15:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:55.747 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.747 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:55.747 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:55.747 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:55.747 15:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:59.940 63488+0 records in 00:12:59.940 63488+0 records out 00:12:59.940 32505856 bytes (33 MB, 31 MiB) copied, 3.53838 s, 9.2 MB/s 00:12:59.940 15:39:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:59.940 15:39:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.940 15:39:57 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:59.940 15:39:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.940 15:39:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:59.940 15:39:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.940 15:39:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.940 [2024-11-25 15:39:58.058886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.940 [2024-11-25 15:39:58.070969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.940 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.940 "name": "raid_bdev1", 00:12:59.940 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:12:59.940 "strip_size_kb": 0, 00:12:59.940 "state": "online", 00:12:59.940 "raid_level": "raid1", 00:12:59.940 "superblock": true, 
00:12:59.940 "num_base_bdevs": 2, 00:12:59.940 "num_base_bdevs_discovered": 1, 00:12:59.940 "num_base_bdevs_operational": 1, 00:12:59.940 "base_bdevs_list": [ 00:12:59.940 { 00:12:59.940 "name": null, 00:12:59.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.940 "is_configured": false, 00:12:59.941 "data_offset": 0, 00:12:59.941 "data_size": 63488 00:12:59.941 }, 00:12:59.941 { 00:12:59.941 "name": "BaseBdev2", 00:12:59.941 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:12:59.941 "is_configured": true, 00:12:59.941 "data_offset": 2048, 00:12:59.941 "data_size": 63488 00:12:59.941 } 00:12:59.941 ] 00:12:59.941 }' 00:12:59.941 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.941 15:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.941 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:59.941 15:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.941 15:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.941 [2024-11-25 15:39:58.522193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.941 [2024-11-25 15:39:58.538506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:59.941 15:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.941 [2024-11-25 15:39:58.540395] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:59.941 15:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:00.879 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.879 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:13:00.879 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.879 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.879 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.879 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.879 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.879 15:39:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.879 15:39:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.146 "name": "raid_bdev1", 00:13:01.146 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:01.146 "strip_size_kb": 0, 00:13:01.146 "state": "online", 00:13:01.146 "raid_level": "raid1", 00:13:01.146 "superblock": true, 00:13:01.146 "num_base_bdevs": 2, 00:13:01.146 "num_base_bdevs_discovered": 2, 00:13:01.146 "num_base_bdevs_operational": 2, 00:13:01.146 "process": { 00:13:01.146 "type": "rebuild", 00:13:01.146 "target": "spare", 00:13:01.146 "progress": { 00:13:01.146 "blocks": 20480, 00:13:01.146 "percent": 32 00:13:01.146 } 00:13:01.146 }, 00:13:01.146 "base_bdevs_list": [ 00:13:01.146 { 00:13:01.146 "name": "spare", 00:13:01.146 "uuid": "8ddc2610-685a-59ff-aa6a-b18f285a2e04", 00:13:01.146 "is_configured": true, 00:13:01.146 "data_offset": 2048, 00:13:01.146 "data_size": 63488 00:13:01.146 }, 00:13:01.146 { 00:13:01.146 "name": "BaseBdev2", 00:13:01.146 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:01.146 "is_configured": true, 00:13:01.146 "data_offset": 2048, 00:13:01.146 "data_size": 63488 
00:13:01.146 } 00:13:01.146 ] 00:13:01.146 }' 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.146 [2024-11-25 15:39:59.679767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.146 [2024-11-25 15:39:59.745218] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:01.146 [2024-11-25 15:39:59.745295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.146 [2024-11-25 15:39:59.745309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.146 [2024-11-25 15:39:59.745318] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.146 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:01.147 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.147 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.147 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.147 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.147 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.147 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.147 15:39:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.147 15:39:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.147 15:39:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.423 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.423 "name": "raid_bdev1", 00:13:01.423 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:01.423 "strip_size_kb": 0, 00:13:01.423 "state": "online", 00:13:01.423 "raid_level": "raid1", 00:13:01.423 "superblock": true, 00:13:01.423 "num_base_bdevs": 2, 00:13:01.423 "num_base_bdevs_discovered": 1, 00:13:01.423 "num_base_bdevs_operational": 1, 00:13:01.423 "base_bdevs_list": [ 00:13:01.423 { 00:13:01.423 "name": null, 00:13:01.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.423 "is_configured": false, 00:13:01.423 "data_offset": 0, 00:13:01.423 "data_size": 63488 00:13:01.423 }, 00:13:01.423 { 00:13:01.423 "name": "BaseBdev2", 00:13:01.423 "uuid": 
"b426da23-af51-5237-83e3-9479425db939", 00:13:01.423 "is_configured": true, 00:13:01.423 "data_offset": 2048, 00:13:01.423 "data_size": 63488 00:13:01.423 } 00:13:01.423 ] 00:13:01.423 }' 00:13:01.423 15:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.423 15:39:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.683 "name": "raid_bdev1", 00:13:01.683 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:01.683 "strip_size_kb": 0, 00:13:01.683 "state": "online", 00:13:01.683 "raid_level": "raid1", 00:13:01.683 "superblock": true, 00:13:01.683 "num_base_bdevs": 2, 00:13:01.683 "num_base_bdevs_discovered": 1, 00:13:01.683 "num_base_bdevs_operational": 1, 00:13:01.683 "base_bdevs_list": [ 00:13:01.683 { 
00:13:01.683 "name": null, 00:13:01.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.683 "is_configured": false, 00:13:01.683 "data_offset": 0, 00:13:01.683 "data_size": 63488 00:13:01.683 }, 00:13:01.683 { 00:13:01.683 "name": "BaseBdev2", 00:13:01.683 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:01.683 "is_configured": true, 00:13:01.683 "data_offset": 2048, 00:13:01.683 "data_size": 63488 00:13:01.683 } 00:13:01.683 ] 00:13:01.683 }' 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.683 15:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.943 [2024-11-25 15:40:00.362986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.943 [2024-11-25 15:40:00.378408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:01.943 15:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.943 15:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:01.943 [2024-11-25 15:40:00.380225] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.884 15:40:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.884 "name": "raid_bdev1", 00:13:02.884 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:02.884 "strip_size_kb": 0, 00:13:02.884 "state": "online", 00:13:02.884 "raid_level": "raid1", 00:13:02.884 "superblock": true, 00:13:02.884 "num_base_bdevs": 2, 00:13:02.884 "num_base_bdevs_discovered": 2, 00:13:02.884 "num_base_bdevs_operational": 2, 00:13:02.884 "process": { 00:13:02.884 "type": "rebuild", 00:13:02.884 "target": "spare", 00:13:02.884 "progress": { 00:13:02.884 "blocks": 20480, 00:13:02.884 "percent": 32 00:13:02.884 } 00:13:02.884 }, 00:13:02.884 "base_bdevs_list": [ 00:13:02.884 { 00:13:02.884 "name": "spare", 00:13:02.884 "uuid": "8ddc2610-685a-59ff-aa6a-b18f285a2e04", 00:13:02.884 "is_configured": true, 00:13:02.884 "data_offset": 2048, 00:13:02.884 "data_size": 63488 00:13:02.884 }, 00:13:02.884 { 00:13:02.884 "name": "BaseBdev2", 00:13:02.884 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:02.884 
"is_configured": true, 00:13:02.884 "data_offset": 2048, 00:13:02.884 "data_size": 63488 00:13:02.884 } 00:13:02.884 ] 00:13:02.884 }' 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:02.884 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:02.884 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:02.885 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:02.885 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:02.885 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:02.885 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=374 00:13:02.885 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.885 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.885 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.885 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.885 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.885 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:02.885 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.885 15:40:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.885 15:40:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.885 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.885 15:40:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.145 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.145 "name": "raid_bdev1", 00:13:03.145 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:03.145 "strip_size_kb": 0, 00:13:03.145 "state": "online", 00:13:03.145 "raid_level": "raid1", 00:13:03.145 "superblock": true, 00:13:03.145 "num_base_bdevs": 2, 00:13:03.145 "num_base_bdevs_discovered": 2, 00:13:03.145 "num_base_bdevs_operational": 2, 00:13:03.145 "process": { 00:13:03.145 "type": "rebuild", 00:13:03.145 "target": "spare", 00:13:03.145 "progress": { 00:13:03.145 "blocks": 22528, 00:13:03.145 "percent": 35 00:13:03.145 } 00:13:03.145 }, 00:13:03.145 "base_bdevs_list": [ 00:13:03.145 { 00:13:03.145 "name": "spare", 00:13:03.145 "uuid": "8ddc2610-685a-59ff-aa6a-b18f285a2e04", 00:13:03.145 "is_configured": true, 00:13:03.145 "data_offset": 2048, 00:13:03.145 "data_size": 63488 00:13:03.145 }, 00:13:03.145 { 00:13:03.145 "name": "BaseBdev2", 00:13:03.145 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:03.145 "is_configured": true, 00:13:03.145 "data_offset": 2048, 00:13:03.145 "data_size": 63488 00:13:03.145 } 00:13:03.145 ] 00:13:03.145 }' 00:13:03.145 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.145 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.145 15:40:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.145 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.145 15:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:04.084 15:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:04.084 15:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.084 15:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.084 15:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.084 15:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.084 15:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.084 15:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.084 15:40:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.084 15:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.084 15:40:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.084 15:40:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.084 15:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.084 "name": "raid_bdev1", 00:13:04.084 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:04.084 "strip_size_kb": 0, 00:13:04.084 "state": "online", 00:13:04.084 "raid_level": "raid1", 00:13:04.084 "superblock": true, 00:13:04.084 "num_base_bdevs": 2, 00:13:04.084 "num_base_bdevs_discovered": 2, 00:13:04.084 "num_base_bdevs_operational": 2, 00:13:04.084 "process": { 
00:13:04.084 "type": "rebuild", 00:13:04.084 "target": "spare", 00:13:04.084 "progress": { 00:13:04.084 "blocks": 45056, 00:13:04.084 "percent": 70 00:13:04.084 } 00:13:04.084 }, 00:13:04.084 "base_bdevs_list": [ 00:13:04.084 { 00:13:04.084 "name": "spare", 00:13:04.084 "uuid": "8ddc2610-685a-59ff-aa6a-b18f285a2e04", 00:13:04.084 "is_configured": true, 00:13:04.084 "data_offset": 2048, 00:13:04.084 "data_size": 63488 00:13:04.084 }, 00:13:04.084 { 00:13:04.084 "name": "BaseBdev2", 00:13:04.084 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:04.084 "is_configured": true, 00:13:04.084 "data_offset": 2048, 00:13:04.084 "data_size": 63488 00:13:04.084 } 00:13:04.084 ] 00:13:04.084 }' 00:13:04.084 15:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.344 15:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.344 15:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.344 15:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.344 15:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:04.913 [2024-11-25 15:40:03.492988] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:04.913 [2024-11-25 15:40:03.493065] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:04.913 [2024-11-25 15:40:03.493160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.173 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:05.173 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.173 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.173 
15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.173 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.173 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.173 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.173 15:40:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.173 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.173 15:40:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.173 15:40:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.433 "name": "raid_bdev1", 00:13:05.433 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:05.433 "strip_size_kb": 0, 00:13:05.433 "state": "online", 00:13:05.433 "raid_level": "raid1", 00:13:05.433 "superblock": true, 00:13:05.433 "num_base_bdevs": 2, 00:13:05.433 "num_base_bdevs_discovered": 2, 00:13:05.433 "num_base_bdevs_operational": 2, 00:13:05.433 "base_bdevs_list": [ 00:13:05.433 { 00:13:05.433 "name": "spare", 00:13:05.433 "uuid": "8ddc2610-685a-59ff-aa6a-b18f285a2e04", 00:13:05.433 "is_configured": true, 00:13:05.433 "data_offset": 2048, 00:13:05.433 "data_size": 63488 00:13:05.433 }, 00:13:05.433 { 00:13:05.433 "name": "BaseBdev2", 00:13:05.433 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:05.433 "is_configured": true, 00:13:05.433 "data_offset": 2048, 00:13:05.433 "data_size": 63488 00:13:05.433 } 00:13:05.433 ] 00:13:05.433 }' 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.433 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.433 "name": "raid_bdev1", 00:13:05.433 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:05.433 "strip_size_kb": 0, 00:13:05.433 "state": "online", 00:13:05.433 "raid_level": "raid1", 00:13:05.433 "superblock": true, 00:13:05.433 "num_base_bdevs": 2, 00:13:05.433 "num_base_bdevs_discovered": 2, 00:13:05.433 "num_base_bdevs_operational": 2, 00:13:05.433 "base_bdevs_list": [ 00:13:05.433 { 00:13:05.433 
"name": "spare", 00:13:05.433 "uuid": "8ddc2610-685a-59ff-aa6a-b18f285a2e04", 00:13:05.433 "is_configured": true, 00:13:05.433 "data_offset": 2048, 00:13:05.433 "data_size": 63488 00:13:05.433 }, 00:13:05.433 { 00:13:05.433 "name": "BaseBdev2", 00:13:05.433 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:05.433 "is_configured": true, 00:13:05.433 "data_offset": 2048, 00:13:05.433 "data_size": 63488 00:13:05.433 } 00:13:05.434 ] 00:13:05.434 }' 00:13:05.434 15:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.434 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.693 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.693 "name": "raid_bdev1", 00:13:05.693 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:05.693 "strip_size_kb": 0, 00:13:05.693 "state": "online", 00:13:05.693 "raid_level": "raid1", 00:13:05.693 "superblock": true, 00:13:05.693 "num_base_bdevs": 2, 00:13:05.693 "num_base_bdevs_discovered": 2, 00:13:05.693 "num_base_bdevs_operational": 2, 00:13:05.693 "base_bdevs_list": [ 00:13:05.693 { 00:13:05.693 "name": "spare", 00:13:05.693 "uuid": "8ddc2610-685a-59ff-aa6a-b18f285a2e04", 00:13:05.693 "is_configured": true, 00:13:05.693 "data_offset": 2048, 00:13:05.693 "data_size": 63488 00:13:05.693 }, 00:13:05.693 { 00:13:05.693 "name": "BaseBdev2", 00:13:05.693 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:05.693 "is_configured": true, 00:13:05.693 "data_offset": 2048, 00:13:05.693 "data_size": 63488 00:13:05.693 } 00:13:05.693 ] 00:13:05.693 }' 00:13:05.693 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.693 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:05.952 [2024-11-25 15:40:04.533957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.952 [2024-11-25 15:40:04.534069] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.952 [2024-11-25 15:40:04.534172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.952 [2024-11-25 15:40:04.534260] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.952 [2024-11-25 15:40:04.534316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:05.952 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:06.213 /dev/nbd0 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.213 1+0 records in 00:13:06.213 1+0 records out 00:13:06.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021955 s, 18.7 MB/s 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:06.213 15:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:06.473 /dev/nbd1 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:06.473 15:40:05 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.473 1+0 records in 00:13:06.473 1+0 records out 00:13:06.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403546 s, 10.2 MB/s 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:06.473 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:06.733 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:06.733 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.733 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:06.733 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.733 
15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:06.733 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.733 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.994 15:40:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:07.254 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:07.254 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.254 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:07.254 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:07.254 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.254 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.254 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.254 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:07.254 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.254 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.254 [2024-11-25 15:40:05.697252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:07.254 [2024-11-25 15:40:05.697313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.254 [2024-11-25 15:40:05.697338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:07.254 [2024-11-25 15:40:05.697347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.254 [2024-11-25 15:40:05.699635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.254 [2024-11-25 15:40:05.699713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:07.254 [2024-11-25 15:40:05.699828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:07.254 [2024-11-25 
15:40:05.699911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.254 [2024-11-25 15:40:05.700105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:07.254 spare 00:13:07.254 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.254 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:07.254 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.254 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.254 [2024-11-25 15:40:05.800049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:07.254 [2024-11-25 15:40:05.800084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:07.255 [2024-11-25 15:40:05.800376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:07.255 [2024-11-25 15:40:05.800549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:07.255 [2024-11-25 15:40:05.800559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:07.255 [2024-11-25 15:40:05.800726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.255 "name": "raid_bdev1", 00:13:07.255 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:07.255 "strip_size_kb": 0, 00:13:07.255 "state": "online", 00:13:07.255 "raid_level": "raid1", 00:13:07.255 "superblock": true, 00:13:07.255 "num_base_bdevs": 2, 00:13:07.255 "num_base_bdevs_discovered": 2, 00:13:07.255 "num_base_bdevs_operational": 2, 00:13:07.255 "base_bdevs_list": [ 00:13:07.255 { 00:13:07.255 "name": "spare", 00:13:07.255 "uuid": "8ddc2610-685a-59ff-aa6a-b18f285a2e04", 00:13:07.255 "is_configured": true, 00:13:07.255 "data_offset": 2048, 00:13:07.255 "data_size": 63488 00:13:07.255 }, 00:13:07.255 { 00:13:07.255 "name": "BaseBdev2", 00:13:07.255 "uuid": 
"b426da23-af51-5237-83e3-9479425db939", 00:13:07.255 "is_configured": true, 00:13:07.255 "data_offset": 2048, 00:13:07.255 "data_size": 63488 00:13:07.255 } 00:13:07.255 ] 00:13:07.255 }' 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.255 15:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.824 "name": "raid_bdev1", 00:13:07.824 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:07.824 "strip_size_kb": 0, 00:13:07.824 "state": "online", 00:13:07.824 "raid_level": "raid1", 00:13:07.824 "superblock": true, 00:13:07.824 "num_base_bdevs": 2, 00:13:07.824 "num_base_bdevs_discovered": 2, 00:13:07.824 "num_base_bdevs_operational": 2, 00:13:07.824 "base_bdevs_list": [ 00:13:07.824 { 
00:13:07.824 "name": "spare", 00:13:07.824 "uuid": "8ddc2610-685a-59ff-aa6a-b18f285a2e04", 00:13:07.824 "is_configured": true, 00:13:07.824 "data_offset": 2048, 00:13:07.824 "data_size": 63488 00:13:07.824 }, 00:13:07.824 { 00:13:07.824 "name": "BaseBdev2", 00:13:07.824 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:07.824 "is_configured": true, 00:13:07.824 "data_offset": 2048, 00:13:07.824 "data_size": 63488 00:13:07.824 } 00:13:07.824 ] 00:13:07.824 }' 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.824 [2024-11-25 15:40:06.384159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.824 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.824 "name": "raid_bdev1", 00:13:07.824 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:07.824 "strip_size_kb": 0, 00:13:07.824 
"state": "online", 00:13:07.824 "raid_level": "raid1", 00:13:07.825 "superblock": true, 00:13:07.825 "num_base_bdevs": 2, 00:13:07.825 "num_base_bdevs_discovered": 1, 00:13:07.825 "num_base_bdevs_operational": 1, 00:13:07.825 "base_bdevs_list": [ 00:13:07.825 { 00:13:07.825 "name": null, 00:13:07.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.825 "is_configured": false, 00:13:07.825 "data_offset": 0, 00:13:07.825 "data_size": 63488 00:13:07.825 }, 00:13:07.825 { 00:13:07.825 "name": "BaseBdev2", 00:13:07.825 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:07.825 "is_configured": true, 00:13:07.825 "data_offset": 2048, 00:13:07.825 "data_size": 63488 00:13:07.825 } 00:13:07.825 ] 00:13:07.825 }' 00:13:07.825 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.825 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.085 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:08.085 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.085 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.085 [2024-11-25 15:40:06.739606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:08.085 [2024-11-25 15:40:06.739853] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:08.085 [2024-11-25 15:40:06.739924] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:08.085 [2024-11-25 15:40:06.740032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:08.085 [2024-11-25 15:40:06.754915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:08.085 15:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.085 15:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:08.085 [2024-11-25 15:40:06.756748] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.466 "name": "raid_bdev1", 00:13:09.466 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:09.466 "strip_size_kb": 0, 00:13:09.466 "state": "online", 00:13:09.466 "raid_level": "raid1", 
00:13:09.466 "superblock": true, 00:13:09.466 "num_base_bdevs": 2, 00:13:09.466 "num_base_bdevs_discovered": 2, 00:13:09.466 "num_base_bdevs_operational": 2, 00:13:09.466 "process": { 00:13:09.466 "type": "rebuild", 00:13:09.466 "target": "spare", 00:13:09.466 "progress": { 00:13:09.466 "blocks": 20480, 00:13:09.466 "percent": 32 00:13:09.466 } 00:13:09.466 }, 00:13:09.466 "base_bdevs_list": [ 00:13:09.466 { 00:13:09.466 "name": "spare", 00:13:09.466 "uuid": "8ddc2610-685a-59ff-aa6a-b18f285a2e04", 00:13:09.466 "is_configured": true, 00:13:09.466 "data_offset": 2048, 00:13:09.466 "data_size": 63488 00:13:09.466 }, 00:13:09.466 { 00:13:09.466 "name": "BaseBdev2", 00:13:09.466 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:09.466 "is_configured": true, 00:13:09.466 "data_offset": 2048, 00:13:09.466 "data_size": 63488 00:13:09.466 } 00:13:09.466 ] 00:13:09.466 }' 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.466 [2024-11-25 15:40:07.920533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:09.466 [2024-11-25 15:40:07.961453] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:09.466 [2024-11-25 15:40:07.961584] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:09.466 [2024-11-25 15:40:07.961619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:09.466 [2024-11-25 15:40:07.961641] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.466 15:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.466 15:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.466 15:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.466 15:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.466 15:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.466 15:40:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.466 15:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.466 "name": "raid_bdev1", 00:13:09.466 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:09.466 "strip_size_kb": 0, 00:13:09.466 "state": "online", 00:13:09.466 "raid_level": "raid1", 00:13:09.466 "superblock": true, 00:13:09.466 "num_base_bdevs": 2, 00:13:09.466 "num_base_bdevs_discovered": 1, 00:13:09.466 "num_base_bdevs_operational": 1, 00:13:09.466 "base_bdevs_list": [ 00:13:09.466 { 00:13:09.466 "name": null, 00:13:09.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.466 "is_configured": false, 00:13:09.466 "data_offset": 0, 00:13:09.466 "data_size": 63488 00:13:09.466 }, 00:13:09.466 { 00:13:09.466 "name": "BaseBdev2", 00:13:09.466 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:09.466 "is_configured": true, 00:13:09.466 "data_offset": 2048, 00:13:09.466 "data_size": 63488 00:13:09.466 } 00:13:09.466 ] 00:13:09.466 }' 00:13:09.466 15:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.466 15:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.035 15:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:10.035 15:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.035 15:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.035 [2024-11-25 15:40:08.467154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:10.035 [2024-11-25 15:40:08.467277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.035 [2024-11-25 15:40:08.467316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:10.035 [2024-11-25 15:40:08.467364] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.035 [2024-11-25 15:40:08.467846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.035 [2024-11-25 15:40:08.467915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:10.035 [2024-11-25 15:40:08.468062] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:10.035 [2024-11-25 15:40:08.468105] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:10.035 [2024-11-25 15:40:08.468155] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:10.035 [2024-11-25 15:40:08.468207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.035 [2024-11-25 15:40:08.483054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:10.035 spare 00:13:10.035 15:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.035 15:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:10.035 [2024-11-25 15:40:08.484873] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:10.973 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.973 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.973 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.973 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.973 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.973 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:10.973 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.973 15:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.973 15:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.973 15:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.973 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.973 "name": "raid_bdev1", 00:13:10.973 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:10.973 "strip_size_kb": 0, 00:13:10.973 "state": "online", 00:13:10.973 "raid_level": "raid1", 00:13:10.973 "superblock": true, 00:13:10.973 "num_base_bdevs": 2, 00:13:10.973 "num_base_bdevs_discovered": 2, 00:13:10.973 "num_base_bdevs_operational": 2, 00:13:10.973 "process": { 00:13:10.973 "type": "rebuild", 00:13:10.973 "target": "spare", 00:13:10.973 "progress": { 00:13:10.973 "blocks": 20480, 00:13:10.973 "percent": 32 00:13:10.973 } 00:13:10.973 }, 00:13:10.973 "base_bdevs_list": [ 00:13:10.973 { 00:13:10.973 "name": "spare", 00:13:10.973 "uuid": "8ddc2610-685a-59ff-aa6a-b18f285a2e04", 00:13:10.973 "is_configured": true, 00:13:10.973 "data_offset": 2048, 00:13:10.973 "data_size": 63488 00:13:10.973 }, 00:13:10.973 { 00:13:10.973 "name": "BaseBdev2", 00:13:10.973 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:10.973 "is_configured": true, 00:13:10.973 "data_offset": 2048, 00:13:10.973 "data_size": 63488 00:13:10.973 } 00:13:10.973 ] 00:13:10.973 }' 00:13:10.973 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.973 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.973 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.974 
15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.974 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:10.974 15:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.974 15:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.974 [2024-11-25 15:40:09.640730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.232 [2024-11-25 15:40:09.689678] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:11.233 [2024-11-25 15:40:09.689778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.233 [2024-11-25 15:40:09.689833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.233 [2024-11-25 15:40:09.689854] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.233 "name": "raid_bdev1", 00:13:11.233 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:11.233 "strip_size_kb": 0, 00:13:11.233 "state": "online", 00:13:11.233 "raid_level": "raid1", 00:13:11.233 "superblock": true, 00:13:11.233 "num_base_bdevs": 2, 00:13:11.233 "num_base_bdevs_discovered": 1, 00:13:11.233 "num_base_bdevs_operational": 1, 00:13:11.233 "base_bdevs_list": [ 00:13:11.233 { 00:13:11.233 "name": null, 00:13:11.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.233 "is_configured": false, 00:13:11.233 "data_offset": 0, 00:13:11.233 "data_size": 63488 00:13:11.233 }, 00:13:11.233 { 00:13:11.233 "name": "BaseBdev2", 00:13:11.233 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:11.233 "is_configured": true, 00:13:11.233 "data_offset": 2048, 00:13:11.233 "data_size": 63488 00:13:11.233 } 00:13:11.233 ] 00:13:11.233 }' 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.233 15:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.492 15:40:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.492 15:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.492 15:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.492 15:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.492 15:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.492 15:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.492 15:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.492 15:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.492 15:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.492 15:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.492 15:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.492 "name": "raid_bdev1", 00:13:11.492 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:11.492 "strip_size_kb": 0, 00:13:11.492 "state": "online", 00:13:11.492 "raid_level": "raid1", 00:13:11.492 "superblock": true, 00:13:11.492 "num_base_bdevs": 2, 00:13:11.492 "num_base_bdevs_discovered": 1, 00:13:11.492 "num_base_bdevs_operational": 1, 00:13:11.492 "base_bdevs_list": [ 00:13:11.492 { 00:13:11.492 "name": null, 00:13:11.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.492 "is_configured": false, 00:13:11.492 "data_offset": 0, 00:13:11.492 "data_size": 63488 00:13:11.492 }, 00:13:11.492 { 00:13:11.492 "name": "BaseBdev2", 00:13:11.492 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:11.492 "is_configured": true, 00:13:11.492 "data_offset": 2048, 00:13:11.492 "data_size": 
63488 00:13:11.493 } 00:13:11.493 ] 00:13:11.493 }' 00:13:11.493 15:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.755 15:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.755 15:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.755 15:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.755 15:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:11.755 15:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.755 15:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.755 15:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.755 15:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:11.755 15:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.755 15:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.755 [2024-11-25 15:40:10.254349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:11.755 [2024-11-25 15:40:10.254407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.755 [2024-11-25 15:40:10.254428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:11.755 [2024-11-25 15:40:10.254447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.755 [2024-11-25 15:40:10.254902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.755 [2024-11-25 15:40:10.254934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:11.755 [2024-11-25 15:40:10.255038] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:11.755 [2024-11-25 15:40:10.255052] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:11.755 [2024-11-25 15:40:10.255062] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:11.755 [2024-11-25 15:40:10.255072] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:11.755 BaseBdev1 00:13:11.755 15:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.755 15:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.697 "name": "raid_bdev1", 00:13:12.697 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:12.697 "strip_size_kb": 0, 00:13:12.697 "state": "online", 00:13:12.697 "raid_level": "raid1", 00:13:12.697 "superblock": true, 00:13:12.697 "num_base_bdevs": 2, 00:13:12.697 "num_base_bdevs_discovered": 1, 00:13:12.697 "num_base_bdevs_operational": 1, 00:13:12.697 "base_bdevs_list": [ 00:13:12.697 { 00:13:12.697 "name": null, 00:13:12.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.697 "is_configured": false, 00:13:12.697 "data_offset": 0, 00:13:12.697 "data_size": 63488 00:13:12.697 }, 00:13:12.697 { 00:13:12.697 "name": "BaseBdev2", 00:13:12.697 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:12.697 "is_configured": true, 00:13:12.697 "data_offset": 2048, 00:13:12.697 "data_size": 63488 00:13:12.697 } 00:13:12.697 ] 00:13:12.697 }' 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.697 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.268 "name": "raid_bdev1", 00:13:13.268 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:13.268 "strip_size_kb": 0, 00:13:13.268 "state": "online", 00:13:13.268 "raid_level": "raid1", 00:13:13.268 "superblock": true, 00:13:13.268 "num_base_bdevs": 2, 00:13:13.268 "num_base_bdevs_discovered": 1, 00:13:13.268 "num_base_bdevs_operational": 1, 00:13:13.268 "base_bdevs_list": [ 00:13:13.268 { 00:13:13.268 "name": null, 00:13:13.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.268 "is_configured": false, 00:13:13.268 "data_offset": 0, 00:13:13.268 "data_size": 63488 00:13:13.268 }, 00:13:13.268 { 00:13:13.268 "name": "BaseBdev2", 00:13:13.268 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:13.268 "is_configured": true, 00:13:13.268 "data_offset": 2048, 00:13:13.268 "data_size": 63488 00:13:13.268 } 00:13:13.268 ] 00:13:13.268 }' 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:13.268 15:40:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.268 [2024-11-25 15:40:11.827669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.268 [2024-11-25 15:40:11.827829] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:13.268 [2024-11-25 15:40:11.827846] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:13.268 request: 00:13:13.268 { 00:13:13.268 "base_bdev": "BaseBdev1", 00:13:13.268 "raid_bdev": "raid_bdev1", 00:13:13.268 "method": 
"bdev_raid_add_base_bdev", 00:13:13.268 "req_id": 1 00:13:13.268 } 00:13:13.268 Got JSON-RPC error response 00:13:13.268 response: 00:13:13.268 { 00:13:13.268 "code": -22, 00:13:13.268 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:13.268 } 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:13.268 15:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:14.209 15:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:14.209 15:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.209 15:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.209 15:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.209 15:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.209 15:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:14.209 15:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.209 15:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.209 15:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.209 15:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.209 15:40:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.209 15:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.209 15:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.209 15:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.209 15:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.469 15:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.469 "name": "raid_bdev1", 00:13:14.469 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:14.469 "strip_size_kb": 0, 00:13:14.469 "state": "online", 00:13:14.469 "raid_level": "raid1", 00:13:14.469 "superblock": true, 00:13:14.469 "num_base_bdevs": 2, 00:13:14.469 "num_base_bdevs_discovered": 1, 00:13:14.469 "num_base_bdevs_operational": 1, 00:13:14.469 "base_bdevs_list": [ 00:13:14.469 { 00:13:14.469 "name": null, 00:13:14.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.469 "is_configured": false, 00:13:14.469 "data_offset": 0, 00:13:14.469 "data_size": 63488 00:13:14.469 }, 00:13:14.469 { 00:13:14.469 "name": "BaseBdev2", 00:13:14.469 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:14.469 "is_configured": true, 00:13:14.469 "data_offset": 2048, 00:13:14.469 "data_size": 63488 00:13:14.469 } 00:13:14.469 ] 00:13:14.469 }' 00:13:14.469 15:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.469 15:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.729 15:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:14.729 15:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.729 15:40:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.729 15:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.729 15:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.729 15:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.729 15:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.729 15:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.729 15:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.729 15:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.729 15:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.729 "name": "raid_bdev1", 00:13:14.729 "uuid": "18024258-7ec7-49f8-b217-f91deec27562", 00:13:14.729 "strip_size_kb": 0, 00:13:14.729 "state": "online", 00:13:14.729 "raid_level": "raid1", 00:13:14.729 "superblock": true, 00:13:14.729 "num_base_bdevs": 2, 00:13:14.729 "num_base_bdevs_discovered": 1, 00:13:14.729 "num_base_bdevs_operational": 1, 00:13:14.729 "base_bdevs_list": [ 00:13:14.729 { 00:13:14.729 "name": null, 00:13:14.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.729 "is_configured": false, 00:13:14.729 "data_offset": 0, 00:13:14.729 "data_size": 63488 00:13:14.729 }, 00:13:14.729 { 00:13:14.729 "name": "BaseBdev2", 00:13:14.729 "uuid": "b426da23-af51-5237-83e3-9479425db939", 00:13:14.729 "is_configured": true, 00:13:14.729 "data_offset": 2048, 00:13:14.729 "data_size": 63488 00:13:14.729 } 00:13:14.729 ] 00:13:14.729 }' 00:13:14.729 15:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.729 15:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:14.729 15:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.990 15:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.990 15:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75430 00:13:14.990 15:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75430 ']' 00:13:14.990 15:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75430 00:13:14.990 15:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:14.991 15:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.991 15:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75430 00:13:14.991 15:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.991 15:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.991 15:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75430' 00:13:14.991 killing process with pid 75430 00:13:14.991 15:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75430 00:13:14.991 Received shutdown signal, test time was about 60.000000 seconds 00:13:14.991 00:13:14.991 Latency(us) 00:13:14.991 [2024-11-25T15:40:13.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.991 [2024-11-25T15:40:13.672Z] =================================================================================================================== 00:13:14.991 [2024-11-25T15:40:13.672Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:14.991 [2024-11-25 15:40:13.462159] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.991 [2024-11-25 
15:40:13.462303] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.991 [2024-11-25 15:40:13.462376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.991 [2024-11-25 15:40:13.462422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:14.991 15:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75430 00:13:15.251 [2024-11-25 15:40:13.745838] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:16.190 15:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:16.190 00:13:16.190 real 0m22.371s 00:13:16.190 user 0m27.573s 00:13:16.190 sys 0m3.396s 00:13:16.190 15:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.190 15:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.190 ************************************ 00:13:16.190 END TEST raid_rebuild_test_sb 00:13:16.190 ************************************ 00:13:16.190 15:40:14 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:16.190 15:40:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:16.190 15:40:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.190 15:40:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:16.190 ************************************ 00:13:16.190 START TEST raid_rebuild_test_io 00:13:16.190 ************************************ 00:13:16.190 15:40:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:16.190 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:16.190 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:16.190 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:16.190 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:16.190 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:16.190 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:16.190 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:16.190 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:16.190 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:16.449 
15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76149 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76149 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76149 ']' 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.449 15:40:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.449 [2024-11-25 15:40:14.959242] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:13:16.449 [2024-11-25 15:40:14.959435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:16.449 Zero copy mechanism will not be used. 
00:13:16.449 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76149 ] 00:13:16.708 [2024-11-25 15:40:15.130157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.708 [2024-11-25 15:40:15.238909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.966 [2024-11-25 15:40:15.436543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.966 [2024-11-25 15:40:15.436628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.225 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.225 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:17.225 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:17.225 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:17.225 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.225 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.225 BaseBdev1_malloc 00:13:17.225 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.225 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:17.225 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.225 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.225 [2024-11-25 15:40:15.823340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:17.225 [2024-11-25 15:40:15.823407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:17.225 [2024-11-25 15:40:15.823431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:17.226 [2024-11-25 15:40:15.823442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.226 [2024-11-25 15:40:15.825417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.226 [2024-11-25 15:40:15.825535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:17.226 BaseBdev1 00:13:17.226 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.226 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:17.226 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:17.226 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.226 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.226 BaseBdev2_malloc 00:13:17.226 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.226 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:17.226 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.226 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.226 [2024-11-25 15:40:15.875958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:17.226 [2024-11-25 15:40:15.876030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.226 [2024-11-25 15:40:15.876048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:17.226 [2024-11-25 15:40:15.876059] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.226 [2024-11-25 15:40:15.878012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.226 [2024-11-25 15:40:15.878063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:17.226 BaseBdev2 00:13:17.226 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.226 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:17.226 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.226 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.486 spare_malloc 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.486 spare_delay 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.486 [2024-11-25 15:40:15.974735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:17.486 [2024-11-25 15:40:15.974853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:13:17.486 [2024-11-25 15:40:15.974894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:17.486 [2024-11-25 15:40:15.974905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.486 [2024-11-25 15:40:15.976927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.486 [2024-11-25 15:40:15.976968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:17.486 spare 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.486 [2024-11-25 15:40:15.986760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.486 [2024-11-25 15:40:15.988486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.486 [2024-11-25 15:40:15.988567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:17.486 [2024-11-25 15:40:15.988581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:17.486 [2024-11-25 15:40:15.988830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:17.486 [2024-11-25 15:40:15.988963] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:17.486 [2024-11-25 15:40:15.988973] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:17.486 [2024-11-25 15:40:15.989187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.486 15:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.486 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.486 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.486 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.486 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.486 "name": "raid_bdev1", 00:13:17.486 "uuid": "f516563d-24fd-481a-8877-e558419b687b", 00:13:17.486 
"strip_size_kb": 0, 00:13:17.486 "state": "online", 00:13:17.486 "raid_level": "raid1", 00:13:17.486 "superblock": false, 00:13:17.486 "num_base_bdevs": 2, 00:13:17.486 "num_base_bdevs_discovered": 2, 00:13:17.486 "num_base_bdevs_operational": 2, 00:13:17.486 "base_bdevs_list": [ 00:13:17.486 { 00:13:17.486 "name": "BaseBdev1", 00:13:17.486 "uuid": "bdf87671-2378-5b75-9800-aa1265a2efbc", 00:13:17.486 "is_configured": true, 00:13:17.486 "data_offset": 0, 00:13:17.486 "data_size": 65536 00:13:17.486 }, 00:13:17.486 { 00:13:17.486 "name": "BaseBdev2", 00:13:17.486 "uuid": "7f8b173d-d5b0-5c8f-8ec9-8e5b85190f71", 00:13:17.486 "is_configured": true, 00:13:17.486 "data_offset": 0, 00:13:17.486 "data_size": 65536 00:13:17.486 } 00:13:17.486 ] 00:13:17.486 }' 00:13:17.486 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.486 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.745 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:17.745 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:17.745 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.745 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.745 [2024-11-25 15:40:16.410305] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.004 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.004 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:18.004 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.004 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:18.004 15:40:16 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.004 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.004 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.004 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:18.004 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:18.004 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:18.004 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.004 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.004 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:18.004 [2024-11-25 15:40:16.513838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:18.004 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.005 15:40:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.005 "name": "raid_bdev1", 00:13:18.005 "uuid": "f516563d-24fd-481a-8877-e558419b687b", 00:13:18.005 "strip_size_kb": 0, 00:13:18.005 "state": "online", 00:13:18.005 "raid_level": "raid1", 00:13:18.005 "superblock": false, 00:13:18.005 "num_base_bdevs": 2, 00:13:18.005 "num_base_bdevs_discovered": 1, 00:13:18.005 "num_base_bdevs_operational": 1, 00:13:18.005 "base_bdevs_list": [ 00:13:18.005 { 00:13:18.005 "name": null, 00:13:18.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.005 "is_configured": false, 00:13:18.005 "data_offset": 0, 00:13:18.005 "data_size": 65536 00:13:18.005 }, 00:13:18.005 { 00:13:18.005 "name": "BaseBdev2", 00:13:18.005 "uuid": "7f8b173d-d5b0-5c8f-8ec9-8e5b85190f71", 00:13:18.005 "is_configured": true, 00:13:18.005 "data_offset": 0, 00:13:18.005 "data_size": 65536 00:13:18.005 } 00:13:18.005 ] 00:13:18.005 }' 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.005 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:18.005 [2024-11-25 15:40:16.601511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:18.005 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:18.005 Zero copy mechanism will not be used. 00:13:18.005 Running I/O for 60 seconds... 00:13:18.582 15:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:18.582 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.582 15:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.582 [2024-11-25 15:40:16.972271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.582 15:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.582 15:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:18.582 [2024-11-25 15:40:17.031456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:18.582 [2024-11-25 15:40:17.033379] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:18.582 [2024-11-25 15:40:17.145399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:18.582 [2024-11-25 15:40:17.145829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:18.842 [2024-11-25 15:40:17.349216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:18.842 [2024-11-25 15:40:17.349644] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:19.102 174.00 IOPS, 522.00 MiB/s [2024-11-25T15:40:17.783Z] [2024-11-25 15:40:17.686191] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:19.102 [2024-11-25 15:40:17.692113] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:19.366 [2024-11-25 15:40:17.905392] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:19.366 [2024-11-25 15:40:17.905652] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:19.366 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.366 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.366 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.366 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.366 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.366 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.366 15:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.366 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.366 15:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.366 15:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.632 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.632 "name": "raid_bdev1", 00:13:19.632 "uuid": "f516563d-24fd-481a-8877-e558419b687b", 00:13:19.632 "strip_size_kb": 0, 00:13:19.632 "state": "online", 00:13:19.632 "raid_level": "raid1", 00:13:19.632 "superblock": false, 
00:13:19.632 "num_base_bdevs": 2, 00:13:19.632 "num_base_bdevs_discovered": 2, 00:13:19.632 "num_base_bdevs_operational": 2, 00:13:19.632 "process": { 00:13:19.632 "type": "rebuild", 00:13:19.632 "target": "spare", 00:13:19.632 "progress": { 00:13:19.632 "blocks": 10240, 00:13:19.632 "percent": 15 00:13:19.632 } 00:13:19.632 }, 00:13:19.632 "base_bdevs_list": [ 00:13:19.632 { 00:13:19.632 "name": "spare", 00:13:19.632 "uuid": "9d63ab40-7e12-5bdd-9fe3-570dff53eace", 00:13:19.632 "is_configured": true, 00:13:19.632 "data_offset": 0, 00:13:19.632 "data_size": 65536 00:13:19.632 }, 00:13:19.632 { 00:13:19.632 "name": "BaseBdev2", 00:13:19.632 "uuid": "7f8b173d-d5b0-5c8f-8ec9-8e5b85190f71", 00:13:19.632 "is_configured": true, 00:13:19.632 "data_offset": 0, 00:13:19.632 "data_size": 65536 00:13:19.632 } 00:13:19.632 ] 00:13:19.632 }' 00:13:19.632 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.632 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.632 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.632 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.632 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:19.632 15:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.632 15:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.632 [2024-11-25 15:40:18.154869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.632 [2024-11-25 15:40:18.219765] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:19.892 [2024-11-25 15:40:18.326670] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:13:19.892 [2024-11-25 15:40:18.334290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.892 [2024-11-25 15:40:18.334387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.892 [2024-11-25 15:40:18.334416] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:19.892 [2024-11-25 15:40:18.381329] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.892 "name": "raid_bdev1", 00:13:19.892 "uuid": "f516563d-24fd-481a-8877-e558419b687b", 00:13:19.892 "strip_size_kb": 0, 00:13:19.892 "state": "online", 00:13:19.892 "raid_level": "raid1", 00:13:19.892 "superblock": false, 00:13:19.892 "num_base_bdevs": 2, 00:13:19.892 "num_base_bdevs_discovered": 1, 00:13:19.892 "num_base_bdevs_operational": 1, 00:13:19.892 "base_bdevs_list": [ 00:13:19.892 { 00:13:19.892 "name": null, 00:13:19.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.892 "is_configured": false, 00:13:19.892 "data_offset": 0, 00:13:19.892 "data_size": 65536 00:13:19.892 }, 00:13:19.892 { 00:13:19.892 "name": "BaseBdev2", 00:13:19.892 "uuid": "7f8b173d-d5b0-5c8f-8ec9-8e5b85190f71", 00:13:19.892 "is_configured": true, 00:13:19.892 "data_offset": 0, 00:13:19.892 "data_size": 65536 00:13:19.892 } 00:13:19.892 ] 00:13:19.892 }' 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.892 15:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.413 153.50 IOPS, 460.50 MiB/s [2024-11-25T15:40:19.094Z] 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:20.413 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.413 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:20.413 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:20.413 15:40:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.413 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.413 15:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.413 15:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.413 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.413 15:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.413 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.413 "name": "raid_bdev1", 00:13:20.413 "uuid": "f516563d-24fd-481a-8877-e558419b687b", 00:13:20.413 "strip_size_kb": 0, 00:13:20.413 "state": "online", 00:13:20.413 "raid_level": "raid1", 00:13:20.413 "superblock": false, 00:13:20.413 "num_base_bdevs": 2, 00:13:20.413 "num_base_bdevs_discovered": 1, 00:13:20.413 "num_base_bdevs_operational": 1, 00:13:20.413 "base_bdevs_list": [ 00:13:20.413 { 00:13:20.413 "name": null, 00:13:20.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.413 "is_configured": false, 00:13:20.413 "data_offset": 0, 00:13:20.413 "data_size": 65536 00:13:20.413 }, 00:13:20.413 { 00:13:20.413 "name": "BaseBdev2", 00:13:20.413 "uuid": "7f8b173d-d5b0-5c8f-8ec9-8e5b85190f71", 00:13:20.413 "is_configured": true, 00:13:20.413 "data_offset": 0, 00:13:20.413 "data_size": 65536 00:13:20.413 } 00:13:20.413 ] 00:13:20.413 }' 00:13:20.413 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.413 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:20.413 15:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.413 15:40:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:20.413 15:40:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:20.413 15:40:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.413 15:40:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.413 [2024-11-25 15:40:19.041109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.413 15:40:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.413 15:40:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:20.673 [2024-11-25 15:40:19.099764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:20.673 [2024-11-25 15:40:19.101667] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:20.673 [2024-11-25 15:40:19.203894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:20.673 [2024-11-25 15:40:19.204490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:20.933 [2024-11-25 15:40:19.412879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:20.933 [2024-11-25 15:40:19.413238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:21.193 176.00 IOPS, 528.00 MiB/s [2024-11-25T15:40:19.874Z] [2024-11-25 15:40:19.659218] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:21.193 [2024-11-25 15:40:19.785355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:21.193 
[2024-11-25 15:40:19.785733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:21.453 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.453 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.453 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.454 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.454 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.454 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.454 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.454 15:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.454 15:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.454 15:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.454 [2024-11-25 15:40:20.120407] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:21.714 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.714 "name": "raid_bdev1", 00:13:21.714 "uuid": "f516563d-24fd-481a-8877-e558419b687b", 00:13:21.714 "strip_size_kb": 0, 00:13:21.714 "state": "online", 00:13:21.714 "raid_level": "raid1", 00:13:21.714 "superblock": false, 00:13:21.714 "num_base_bdevs": 2, 00:13:21.714 "num_base_bdevs_discovered": 2, 00:13:21.714 "num_base_bdevs_operational": 2, 00:13:21.714 "process": { 00:13:21.714 "type": "rebuild", 00:13:21.714 "target": "spare", 00:13:21.714 "progress": 
{ 00:13:21.714 "blocks": 12288, 00:13:21.714 "percent": 18 00:13:21.714 } 00:13:21.714 }, 00:13:21.714 "base_bdevs_list": [ 00:13:21.714 { 00:13:21.714 "name": "spare", 00:13:21.714 "uuid": "9d63ab40-7e12-5bdd-9fe3-570dff53eace", 00:13:21.714 "is_configured": true, 00:13:21.714 "data_offset": 0, 00:13:21.714 "data_size": 65536 00:13:21.714 }, 00:13:21.714 { 00:13:21.714 "name": "BaseBdev2", 00:13:21.714 "uuid": "7f8b173d-d5b0-5c8f-8ec9-8e5b85190f71", 00:13:21.714 "is_configured": true, 00:13:21.714 "data_offset": 0, 00:13:21.714 "data_size": 65536 00:13:21.714 } 00:13:21.714 ] 00:13:21.714 }' 00:13:21.714 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.714 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=393 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.715 "name": "raid_bdev1", 00:13:21.715 "uuid": "f516563d-24fd-481a-8877-e558419b687b", 00:13:21.715 "strip_size_kb": 0, 00:13:21.715 "state": "online", 00:13:21.715 "raid_level": "raid1", 00:13:21.715 "superblock": false, 00:13:21.715 "num_base_bdevs": 2, 00:13:21.715 "num_base_bdevs_discovered": 2, 00:13:21.715 "num_base_bdevs_operational": 2, 00:13:21.715 "process": { 00:13:21.715 "type": "rebuild", 00:13:21.715 "target": "spare", 00:13:21.715 "progress": { 00:13:21.715 "blocks": 14336, 00:13:21.715 "percent": 21 00:13:21.715 } 00:13:21.715 }, 00:13:21.715 "base_bdevs_list": [ 00:13:21.715 { 00:13:21.715 "name": "spare", 00:13:21.715 "uuid": "9d63ab40-7e12-5bdd-9fe3-570dff53eace", 00:13:21.715 "is_configured": true, 00:13:21.715 "data_offset": 0, 00:13:21.715 "data_size": 65536 00:13:21.715 }, 00:13:21.715 { 00:13:21.715 "name": "BaseBdev2", 00:13:21.715 "uuid": "7f8b173d-d5b0-5c8f-8ec9-8e5b85190f71", 00:13:21.715 "is_configured": true, 00:13:21.715 "data_offset": 0, 00:13:21.715 "data_size": 65536 00:13:21.715 } 00:13:21.715 ] 00:13:21.715 }' 00:13:21.715 
15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.715 [2024-11-25 15:40:20.335309] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.715 15:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:22.233 149.00 IOPS, 447.00 MiB/s [2024-11-25T15:40:20.914Z] [2024-11-25 15:40:20.687675] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:22.233 [2024-11-25 15:40:20.688045] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:22.493 [2024-11-25 15:40:21.010688] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:22.754 15:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.754 15:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.754 15:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.754 15:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.754 15:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.754 15:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.754 15:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:22.754 15:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.754 15:40:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.754 15:40:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.754 15:40:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.754 15:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.754 "name": "raid_bdev1", 00:13:22.754 "uuid": "f516563d-24fd-481a-8877-e558419b687b", 00:13:22.754 "strip_size_kb": 0, 00:13:22.754 "state": "online", 00:13:22.754 "raid_level": "raid1", 00:13:22.754 "superblock": false, 00:13:22.754 "num_base_bdevs": 2, 00:13:22.754 "num_base_bdevs_discovered": 2, 00:13:22.754 "num_base_bdevs_operational": 2, 00:13:22.754 "process": { 00:13:22.754 "type": "rebuild", 00:13:22.754 "target": "spare", 00:13:22.754 "progress": { 00:13:22.754 "blocks": 32768, 00:13:22.754 "percent": 50 00:13:22.754 } 00:13:22.754 }, 00:13:22.754 "base_bdevs_list": [ 00:13:22.754 { 00:13:22.754 "name": "spare", 00:13:22.754 "uuid": "9d63ab40-7e12-5bdd-9fe3-570dff53eace", 00:13:22.754 "is_configured": true, 00:13:22.754 "data_offset": 0, 00:13:22.754 "data_size": 65536 00:13:22.754 }, 00:13:22.754 { 00:13:22.754 "name": "BaseBdev2", 00:13:22.754 "uuid": "7f8b173d-d5b0-5c8f-8ec9-8e5b85190f71", 00:13:22.754 "is_configured": true, 00:13:22.754 "data_offset": 0, 00:13:22.754 "data_size": 65536 00:13:22.754 } 00:13:22.754 ] 00:13:22.754 }' 00:13:22.754 15:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.014 15:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.014 15:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.014 
15:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.014 15:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:23.955 129.80 IOPS, 389.40 MiB/s [2024-11-25T15:40:22.636Z] [2024-11-25 15:40:22.370640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:23.955 15:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.955 15:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.955 15:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.955 15:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.955 15:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.955 15:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.955 15:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.955 15:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.955 15:40:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.955 15:40:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.955 15:40:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.955 15:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.955 "name": "raid_bdev1", 00:13:23.955 "uuid": "f516563d-24fd-481a-8877-e558419b687b", 00:13:23.955 "strip_size_kb": 0, 00:13:23.955 "state": "online", 00:13:23.955 "raid_level": "raid1", 00:13:23.955 "superblock": false, 00:13:23.955 "num_base_bdevs": 2, 
00:13:23.955 "num_base_bdevs_discovered": 2, 00:13:23.955 "num_base_bdevs_operational": 2, 00:13:23.955 "process": { 00:13:23.955 "type": "rebuild", 00:13:23.955 "target": "spare", 00:13:23.955 "progress": { 00:13:23.955 "blocks": 53248, 00:13:23.955 "percent": 81 00:13:23.955 } 00:13:23.955 }, 00:13:23.955 "base_bdevs_list": [ 00:13:23.955 { 00:13:23.955 "name": "spare", 00:13:23.955 "uuid": "9d63ab40-7e12-5bdd-9fe3-570dff53eace", 00:13:23.955 "is_configured": true, 00:13:23.955 "data_offset": 0, 00:13:23.955 "data_size": 65536 00:13:23.955 }, 00:13:23.955 { 00:13:23.955 "name": "BaseBdev2", 00:13:23.955 "uuid": "7f8b173d-d5b0-5c8f-8ec9-8e5b85190f71", 00:13:23.955 "is_configured": true, 00:13:23.955 "data_offset": 0, 00:13:23.955 "data_size": 65536 00:13:23.955 } 00:13:23.955 ] 00:13:23.955 }' 00:13:23.955 15:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.955 116.67 IOPS, 350.00 MiB/s [2024-11-25T15:40:22.636Z] 15:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.955 15:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.215 15:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.215 15:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:24.475 [2024-11-25 15:40:23.134299] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:24.736 [2024-11-25 15:40:23.234109] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:24.736 [2024-11-25 15:40:23.236043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.996 105.14 IOPS, 315.43 MiB/s [2024-11-25T15:40:23.677Z] 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:24.996 15:40:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.996 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.996 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.996 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.996 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.996 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.996 15:40:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.996 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.996 15:40:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.256 "name": "raid_bdev1", 00:13:25.256 "uuid": "f516563d-24fd-481a-8877-e558419b687b", 00:13:25.256 "strip_size_kb": 0, 00:13:25.256 "state": "online", 00:13:25.256 "raid_level": "raid1", 00:13:25.256 "superblock": false, 00:13:25.256 "num_base_bdevs": 2, 00:13:25.256 "num_base_bdevs_discovered": 2, 00:13:25.256 "num_base_bdevs_operational": 2, 00:13:25.256 "base_bdevs_list": [ 00:13:25.256 { 00:13:25.256 "name": "spare", 00:13:25.256 "uuid": "9d63ab40-7e12-5bdd-9fe3-570dff53eace", 00:13:25.256 "is_configured": true, 00:13:25.256 "data_offset": 0, 00:13:25.256 "data_size": 65536 00:13:25.256 }, 00:13:25.256 { 00:13:25.256 "name": "BaseBdev2", 00:13:25.256 "uuid": "7f8b173d-d5b0-5c8f-8ec9-8e5b85190f71", 00:13:25.256 "is_configured": true, 00:13:25.256 "data_offset": 0, 00:13:25.256 
"data_size": 65536 00:13:25.256 } 00:13:25.256 ] 00:13:25.256 }' 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.256 "name": "raid_bdev1", 00:13:25.256 "uuid": "f516563d-24fd-481a-8877-e558419b687b", 00:13:25.256 "strip_size_kb": 0, 00:13:25.256 "state": "online", 00:13:25.256 "raid_level": 
"raid1", 00:13:25.256 "superblock": false, 00:13:25.256 "num_base_bdevs": 2, 00:13:25.256 "num_base_bdevs_discovered": 2, 00:13:25.256 "num_base_bdevs_operational": 2, 00:13:25.256 "base_bdevs_list": [ 00:13:25.256 { 00:13:25.256 "name": "spare", 00:13:25.256 "uuid": "9d63ab40-7e12-5bdd-9fe3-570dff53eace", 00:13:25.256 "is_configured": true, 00:13:25.256 "data_offset": 0, 00:13:25.256 "data_size": 65536 00:13:25.256 }, 00:13:25.256 { 00:13:25.256 "name": "BaseBdev2", 00:13:25.256 "uuid": "7f8b173d-d5b0-5c8f-8ec9-8e5b85190f71", 00:13:25.256 "is_configured": true, 00:13:25.256 "data_offset": 0, 00:13:25.256 "data_size": 65536 00:13:25.256 } 00:13:25.256 ] 00:13:25.256 }' 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.256 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.516 15:40:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.516 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.516 "name": "raid_bdev1", 00:13:25.516 "uuid": "f516563d-24fd-481a-8877-e558419b687b", 00:13:25.516 "strip_size_kb": 0, 00:13:25.516 "state": "online", 00:13:25.516 "raid_level": "raid1", 00:13:25.516 "superblock": false, 00:13:25.516 "num_base_bdevs": 2, 00:13:25.516 "num_base_bdevs_discovered": 2, 00:13:25.516 "num_base_bdevs_operational": 2, 00:13:25.516 "base_bdevs_list": [ 00:13:25.516 { 00:13:25.516 "name": "spare", 00:13:25.516 "uuid": "9d63ab40-7e12-5bdd-9fe3-570dff53eace", 00:13:25.516 "is_configured": true, 00:13:25.516 "data_offset": 0, 00:13:25.516 "data_size": 65536 00:13:25.516 }, 00:13:25.516 { 00:13:25.516 "name": "BaseBdev2", 00:13:25.516 "uuid": "7f8b173d-d5b0-5c8f-8ec9-8e5b85190f71", 00:13:25.516 "is_configured": true, 00:13:25.517 "data_offset": 0, 00:13:25.517 "data_size": 65536 00:13:25.517 } 00:13:25.517 ] 00:13:25.517 }' 00:13:25.517 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.517 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.777 15:40:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:25.777 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.777 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.777 [2024-11-25 15:40:24.414593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:25.777 [2024-11-25 15:40:24.414621] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.777 00:13:25.777 Latency(us) 00:13:25.777 [2024-11-25T15:40:24.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.777 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:25.777 raid_bdev1 : 7.86 97.35 292.06 0.00 0.00 14595.87 300.49 130957.53 00:13:25.777 [2024-11-25T15:40:24.458Z] =================================================================================================================== 00:13:25.777 [2024-11-25T15:40:24.458Z] Total : 97.35 292.06 0.00 0.00 14595.87 300.49 130957.53 00:13:26.037 [2024-11-25 15:40:24.466957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.037 [2024-11-25 15:40:24.466996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.037 [2024-11-25 15:40:24.467102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.037 [2024-11-25 15:40:24.467116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:26.037 { 00:13:26.037 "results": [ 00:13:26.037 { 00:13:26.037 "job": "raid_bdev1", 00:13:26.037 "core_mask": "0x1", 00:13:26.037 "workload": "randrw", 00:13:26.037 "percentage": 50, 00:13:26.037 "status": "finished", 00:13:26.037 "queue_depth": 2, 00:13:26.037 "io_size": 3145728, 00:13:26.037 "runtime": 7.858098, 00:13:26.037 "iops": 
97.35180192458786, 00:13:26.037 "mibps": 292.0554057737636, 00:13:26.037 "io_failed": 0, 00:13:26.037 "io_timeout": 0, 00:13:26.037 "avg_latency_us": 14595.869262779348, 00:13:26.037 "min_latency_us": 300.49257641921395, 00:13:26.037 "max_latency_us": 130957.52663755459 00:13:26.037 } 00:13:26.037 ], 00:13:26.037 "core_count": 1 00:13:26.037 } 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@12 -- # local i 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:26.037 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:26.297 /dev/nbd0 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.298 1+0 records in 00:13:26.298 1+0 records out 00:13:26.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458378 s, 8.9 MB/s 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@890 -- # size=4096 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:26.298 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:26.298 /dev/nbd1 00:13:26.558 15:40:24 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:26.558 15:40:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:26.558 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:26.558 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:26.558 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:26.558 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:26.558 15:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.558 1+0 records in 00:13:26.558 1+0 records out 00:13:26.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302593 s, 13.5 MB/s 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:26.558 15:40:25 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.558 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:26.818 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:26.818 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:26.818 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:26.818 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.818 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.818 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:26.818 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:26.818 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.818 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:13:26.818 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:26.818 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:26.818 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:26.818 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:26.818 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.818 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:27.078 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:27.078 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:27.078 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:27.078 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.078 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.078 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:27.078 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:27.078 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.079 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:27.079 15:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76149 00:13:27.079 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76149 ']' 00:13:27.079 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76149 00:13:27.079 15:40:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@959 -- # uname 00:13:27.079 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.079 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76149 00:13:27.079 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.079 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.079 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76149' 00:13:27.079 killing process with pid 76149 00:13:27.079 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76149 00:13:27.079 Received shutdown signal, test time was about 9.054478 seconds 00:13:27.079 00:13:27.079 Latency(us) 00:13:27.079 [2024-11-25T15:40:25.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.079 [2024-11-25T15:40:25.760Z] =================================================================================================================== 00:13:27.079 [2024-11-25T15:40:25.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:27.079 [2024-11-25 15:40:25.640571] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.079 15:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76149 00:13:27.339 [2024-11-25 15:40:25.866710] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.718 15:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:28.718 00:13:28.718 real 0m12.119s 00:13:28.718 user 0m15.247s 00:13:28.718 sys 0m1.458s 00:13:28.718 15:40:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.718 15:40:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.718 ************************************ 
00:13:28.718 END TEST raid_rebuild_test_io 00:13:28.718 ************************************ 00:13:28.718 15:40:27 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:28.718 15:40:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:28.718 15:40:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.718 15:40:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.718 ************************************ 00:13:28.718 START TEST raid_rebuild_test_sb_io 00:13:28.718 ************************************ 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:28.718 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76525 00:13:28.719 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:28.719 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76525 00:13:28.719 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76525 ']' 00:13:28.719 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.719 
15:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.719 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.719 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.719 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.719 [2024-11-25 15:40:27.148055] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:13:28.719 [2024-11-25 15:40:27.148251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:28.719 Zero copy mechanism will not be used. 00:13:28.719 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76525 ] 00:13:28.719 [2024-11-25 15:40:27.319183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.978 [2024-11-25 15:40:27.425229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.978 [2024-11-25 15:40:27.620151] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.978 [2024-11-25 15:40:27.620248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.547 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.547 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:29.547 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.547 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:29.547 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.547 15:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.547 BaseBdev1_malloc 00:13:29.547 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.547 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:29.547 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.547 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.547 [2024-11-25 15:40:28.009492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:29.547 [2024-11-25 15:40:28.009571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.547 [2024-11-25 15:40:28.009594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:29.547 [2024-11-25 15:40:28.009605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.547 [2024-11-25 15:40:28.011591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.547 [2024-11-25 15:40:28.011631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:29.547 BaseBdev1 00:13:29.547 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.547 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.547 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:29.547 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:29.547 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.547 BaseBdev2_malloc 00:13:29.547 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.547 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:29.547 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.547 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.547 [2024-11-25 15:40:28.062245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:29.547 [2024-11-25 15:40:28.062297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.547 [2024-11-25 15:40:28.062330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:29.547 [2024-11-25 15:40:28.062342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.547 [2024-11-25 15:40:28.064318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.547 [2024-11-25 15:40:28.064425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:29.547 BaseBdev2 00:13:29.547 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.547 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.548 spare_malloc 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.548 15:40:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.548 spare_delay 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.548 [2024-11-25 15:40:28.136047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:29.548 [2024-11-25 15:40:28.136139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.548 [2024-11-25 15:40:28.136190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:29.548 [2024-11-25 15:40:28.136220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.548 [2024-11-25 15:40:28.138204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.548 [2024-11-25 15:40:28.138285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:29.548 spare 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.548 15:40:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.548 [2024-11-25 15:40:28.148084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.548 [2024-11-25 15:40:28.149741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.548 [2024-11-25 15:40:28.149904] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:29.548 [2024-11-25 15:40:28.149920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:29.548 [2024-11-25 15:40:28.150157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:29.548 [2024-11-25 15:40:28.150307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:29.548 [2024-11-25 15:40:28.150315] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:29.548 [2024-11-25 15:40:28.150457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.548 "name": "raid_bdev1", 00:13:29.548 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:29.548 "strip_size_kb": 0, 00:13:29.548 "state": "online", 00:13:29.548 "raid_level": "raid1", 00:13:29.548 "superblock": true, 00:13:29.548 "num_base_bdevs": 2, 00:13:29.548 "num_base_bdevs_discovered": 2, 00:13:29.548 "num_base_bdevs_operational": 2, 00:13:29.548 "base_bdevs_list": [ 00:13:29.548 { 00:13:29.548 "name": "BaseBdev1", 00:13:29.548 "uuid": "36d55a4a-7dbf-5e92-836e-d9b451dc8276", 00:13:29.548 "is_configured": true, 00:13:29.548 "data_offset": 2048, 00:13:29.548 "data_size": 63488 00:13:29.548 }, 00:13:29.548 { 00:13:29.548 "name": "BaseBdev2", 00:13:29.548 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:29.548 "is_configured": true, 00:13:29.548 "data_offset": 2048, 00:13:29.548 "data_size": 63488 00:13:29.548 } 00:13:29.548 ] 00:13:29.548 }' 00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:13:29.548 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.118 [2024-11-25 15:40:28.643455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.118 [2024-11-25 15:40:28.727068] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.118 15:40:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.118 "name": "raid_bdev1", 00:13:30.118 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:30.118 "strip_size_kb": 0, 00:13:30.118 "state": "online", 00:13:30.118 "raid_level": "raid1", 00:13:30.118 "superblock": true, 00:13:30.118 "num_base_bdevs": 2, 00:13:30.118 "num_base_bdevs_discovered": 1, 00:13:30.118 "num_base_bdevs_operational": 1, 00:13:30.118 "base_bdevs_list": [ 00:13:30.118 { 00:13:30.118 "name": null, 00:13:30.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.118 "is_configured": false, 00:13:30.118 "data_offset": 0, 00:13:30.118 "data_size": 63488 00:13:30.118 }, 00:13:30.118 { 00:13:30.118 "name": "BaseBdev2", 00:13:30.118 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:30.118 "is_configured": true, 00:13:30.118 "data_offset": 2048, 00:13:30.118 "data_size": 63488 00:13:30.118 } 00:13:30.118 ] 00:13:30.118 }' 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.118 15:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.380 [2024-11-25 15:40:28.827129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:30.380 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:30.380 Zero copy mechanism will not be used. 00:13:30.380 Running I/O for 60 seconds... 
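Throughout this run the `verify_raid_bdev_state` helper re-checks the array by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` and comparing fields. The sketch below replicates that check in Python against a payload transcribed from the dump above (taken after `bdev_raid_remove_base_bdev BaseBdev1`, so one base slot is unconfigured); it is an illustration of the assertion logic, not part of the SPDK test suite, and the helper name mirrors the shell function only by convention.

```python
import json

# Payload transcribed from the `bdev_raid_get_bdevs all` dump in the log
# above, after BaseBdev1 was removed; field names follow the SPDK output.
RAW = '''
[{
  "name": "raid_bdev1",
  "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1,
  "base_bdevs_list": [
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 63488},
    {"name": "BaseBdev2", "uuid": "57dd66db-d740-5332-b769-5b69fda0b452",
     "is_configured": true, "data_offset": 2048, "data_size": 63488}
  ]
}]
'''

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size_kb, num_operational):
    """Select one raid bdev by name (like the jq `select` filter) and
    compare the fields the shell helper asserts on."""
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size_kb
            and info["num_base_bdevs_operational"] == num_operational)

bdevs = json.loads(RAW)
# Mirrors `verify_raid_bdev_state raid_bdev1 online raid1 0 1` from the log.
print(verify_raid_bdev_state(bdevs, "raid_bdev1", "online", "raid1", 0, 1))
```

The `data_size` of 63488 blocks with a `data_offset` of 2048 matches the `-s` (superblock) raid creation earlier in the log: the superblock consumes the first 2048 blocks of each 65536-block malloc bdev.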
00:13:30.640 15:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:30.640 15:40:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.640 15:40:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.640 [2024-11-25 15:40:29.105223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:30.640 15:40:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.640 15:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:30.640 [2024-11-25 15:40:29.148496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:30.640 [2024-11-25 15:40:29.150375] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:30.640 [2024-11-25 15:40:29.268440] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:30.640 [2024-11-25 15:40:29.269076] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:30.899 [2024-11-25 15:40:29.476522] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:30.899 [2024-11-25 15:40:29.476858] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:31.159 [2024-11-25 15:40:29.808474] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:31.419 164.00 IOPS, 492.00 MiB/s [2024-11-25T15:40:30.100Z] [2024-11-25 15:40:30.018751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:31.419 [2024-11-25 15:40:30.019080] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.678 "name": "raid_bdev1", 00:13:31.678 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:31.678 "strip_size_kb": 0, 00:13:31.678 "state": "online", 00:13:31.678 "raid_level": "raid1", 00:13:31.678 "superblock": true, 00:13:31.678 "num_base_bdevs": 2, 00:13:31.678 "num_base_bdevs_discovered": 2, 00:13:31.678 "num_base_bdevs_operational": 2, 00:13:31.678 "process": { 00:13:31.678 "type": "rebuild", 00:13:31.678 "target": "spare", 00:13:31.678 "progress": { 00:13:31.678 "blocks": 10240, 00:13:31.678 "percent": 16 00:13:31.678 } 00:13:31.678 }, 00:13:31.678 "base_bdevs_list": [ 00:13:31.678 { 00:13:31.678 "name": "spare", 
00:13:31.678 "uuid": "bb0007c2-9768-565a-8dd5-4904f4deb68c", 00:13:31.678 "is_configured": true, 00:13:31.678 "data_offset": 2048, 00:13:31.678 "data_size": 63488 00:13:31.678 }, 00:13:31.678 { 00:13:31.678 "name": "BaseBdev2", 00:13:31.678 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:31.678 "is_configured": true, 00:13:31.678 "data_offset": 2048, 00:13:31.678 "data_size": 63488 00:13:31.678 } 00:13:31.678 ] 00:13:31.678 }' 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.678 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.678 [2024-11-25 15:40:30.282131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:31.938 [2024-11-25 15:40:30.388131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:31.938 [2024-11-25 15:40:30.488976] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:31.938 [2024-11-25 15:40:30.496258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.938 [2024-11-25 15:40:30.496307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:31.938 [2024-11-25 15:40:30.496326] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to 
remove target bdev: No such device 00:13:31.938 [2024-11-25 15:40:30.540329] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.938 15:40:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.938 "name": "raid_bdev1", 00:13:31.938 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:31.938 "strip_size_kb": 0, 00:13:31.938 "state": "online", 00:13:31.938 "raid_level": "raid1", 00:13:31.938 "superblock": true, 00:13:31.938 "num_base_bdevs": 2, 00:13:31.938 "num_base_bdevs_discovered": 1, 00:13:31.938 "num_base_bdevs_operational": 1, 00:13:31.938 "base_bdevs_list": [ 00:13:31.938 { 00:13:31.938 "name": null, 00:13:31.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.938 "is_configured": false, 00:13:31.938 "data_offset": 0, 00:13:31.938 "data_size": 63488 00:13:31.938 }, 00:13:31.938 { 00:13:31.938 "name": "BaseBdev2", 00:13:31.938 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:31.938 "is_configured": true, 00:13:31.938 "data_offset": 2048, 00:13:31.938 "data_size": 63488 00:13:31.938 } 00:13:31.938 ] 00:13:31.938 }' 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.938 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.457 158.50 IOPS, 475.50 MiB/s [2024-11-25T15:40:31.138Z] 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:32.457 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.457 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:32.457 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:32.457 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.457 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.457 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:32.457 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.457 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.457 15:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.457 15:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.457 "name": "raid_bdev1", 00:13:32.457 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:32.457 "strip_size_kb": 0, 00:13:32.457 "state": "online", 00:13:32.457 "raid_level": "raid1", 00:13:32.457 "superblock": true, 00:13:32.457 "num_base_bdevs": 2, 00:13:32.457 "num_base_bdevs_discovered": 1, 00:13:32.457 "num_base_bdevs_operational": 1, 00:13:32.457 "base_bdevs_list": [ 00:13:32.457 { 00:13:32.457 "name": null, 00:13:32.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.457 "is_configured": false, 00:13:32.457 "data_offset": 0, 00:13:32.457 "data_size": 63488 00:13:32.457 }, 00:13:32.457 { 00:13:32.457 "name": "BaseBdev2", 00:13:32.457 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:32.457 "is_configured": true, 00:13:32.457 "data_offset": 2048, 00:13:32.457 "data_size": 63488 00:13:32.457 } 00:13:32.457 ] 00:13:32.457 }' 00:13:32.457 15:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.458 15:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:32.458 15:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.458 15:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:32.458 15:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:32.458 15:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:32.458 15:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.458 [2024-11-25 15:40:31.100981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.718 15:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.718 15:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:32.718 [2024-11-25 15:40:31.167875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:32.718 [2024-11-25 15:40:31.169879] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:32.718 [2024-11-25 15:40:31.276510] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:32.718 [2024-11-25 15:40:31.276895] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:32.977 [2024-11-25 15:40:31.402801] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:32.977 [2024-11-25 15:40:31.403213] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:33.236 [2024-11-25 15:40:31.725315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:33.236 163.00 IOPS, 489.00 MiB/s [2024-11-25T15:40:31.917Z] [2024-11-25 15:40:31.845096] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:33.236 [2024-11-25 15:40:31.845388] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:33.496 [2024-11-25 15:40:32.092140] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
14336 offset_begin: 12288 offset_end: 18432 00:13:33.496 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.496 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.496 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.496 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.496 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.496 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.496 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.496 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.496 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.496 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.755 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.755 "name": "raid_bdev1", 00:13:33.755 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:33.755 "strip_size_kb": 0, 00:13:33.755 "state": "online", 00:13:33.755 "raid_level": "raid1", 00:13:33.755 "superblock": true, 00:13:33.755 "num_base_bdevs": 2, 00:13:33.755 "num_base_bdevs_discovered": 2, 00:13:33.755 "num_base_bdevs_operational": 2, 00:13:33.755 "process": { 00:13:33.755 "type": "rebuild", 00:13:33.755 "target": "spare", 00:13:33.755 "progress": { 00:13:33.755 "blocks": 14336, 00:13:33.755 "percent": 22 00:13:33.755 } 00:13:33.755 }, 00:13:33.755 "base_bdevs_list": [ 00:13:33.755 { 00:13:33.755 "name": "spare", 00:13:33.755 "uuid": "bb0007c2-9768-565a-8dd5-4904f4deb68c", 
00:13:33.755 "is_configured": true, 00:13:33.755 "data_offset": 2048, 00:13:33.755 "data_size": 63488 00:13:33.755 }, 00:13:33.755 { 00:13:33.755 "name": "BaseBdev2", 00:13:33.755 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:33.755 "is_configured": true, 00:13:33.755 "data_offset": 2048, 00:13:33.755 "data_size": 63488 00:13:33.755 } 00:13:33.755 ] 00:13:33.755 }' 00:13:33.755 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.755 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.755 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.755 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.755 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:33.755 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:33.755 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:33.755 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=405 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.756 15:40:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.756 [2024-11-25 15:40:32.299503] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.756 "name": "raid_bdev1", 00:13:33.756 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:33.756 "strip_size_kb": 0, 00:13:33.756 "state": "online", 00:13:33.756 "raid_level": "raid1", 00:13:33.756 "superblock": true, 00:13:33.756 "num_base_bdevs": 2, 00:13:33.756 "num_base_bdevs_discovered": 2, 00:13:33.756 "num_base_bdevs_operational": 2, 00:13:33.756 "process": { 00:13:33.756 "type": "rebuild", 00:13:33.756 "target": "spare", 00:13:33.756 "progress": { 00:13:33.756 "blocks": 16384, 00:13:33.756 "percent": 25 00:13:33.756 } 00:13:33.756 }, 00:13:33.756 "base_bdevs_list": [ 00:13:33.756 { 00:13:33.756 "name": "spare", 00:13:33.756 "uuid": "bb0007c2-9768-565a-8dd5-4904f4deb68c", 00:13:33.756 "is_configured": true, 00:13:33.756 "data_offset": 2048, 00:13:33.756 "data_size": 63488 00:13:33.756 }, 00:13:33.756 { 00:13:33.756 "name": 
"BaseBdev2", 00:13:33.756 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:33.756 "is_configured": true, 00:13:33.756 "data_offset": 2048, 00:13:33.756 "data_size": 63488 00:13:33.756 } 00:13:33.756 ] 00:13:33.756 }' 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.756 15:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:34.325 [2024-11-25 15:40:32.716943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:34.585 139.75 IOPS, 419.25 MiB/s [2024-11-25T15:40:33.266Z] [2024-11-25 15:40:33.051936] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:34.845 [2024-11-25 15:40:33.267593] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:34.845 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:34.845 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.845 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.845 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.845 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.845 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:13:34.845 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.845 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.845 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.845 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.845 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.845 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.845 "name": "raid_bdev1", 00:13:34.845 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:34.845 "strip_size_kb": 0, 00:13:34.845 "state": "online", 00:13:34.845 "raid_level": "raid1", 00:13:34.845 "superblock": true, 00:13:34.845 "num_base_bdevs": 2, 00:13:34.845 "num_base_bdevs_discovered": 2, 00:13:34.845 "num_base_bdevs_operational": 2, 00:13:34.845 "process": { 00:13:34.845 "type": "rebuild", 00:13:34.845 "target": "spare", 00:13:34.845 "progress": { 00:13:34.845 "blocks": 30720, 00:13:34.845 "percent": 48 00:13:34.845 } 00:13:34.845 }, 00:13:34.845 "base_bdevs_list": [ 00:13:34.845 { 00:13:34.845 "name": "spare", 00:13:34.845 "uuid": "bb0007c2-9768-565a-8dd5-4904f4deb68c", 00:13:34.845 "is_configured": true, 00:13:34.845 "data_offset": 2048, 00:13:34.845 "data_size": 63488 00:13:34.845 }, 00:13:34.845 { 00:13:34.845 "name": "BaseBdev2", 00:13:34.845 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:34.845 "is_configured": true, 00:13:34.845 "data_offset": 2048, 00:13:34.845 "data_size": 63488 00:13:34.845 } 00:13:34.845 ] 00:13:34.845 }' 00:13:34.845 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.845 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:13:34.845 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.104 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.104 15:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:35.364 120.60 IOPS, 361.80 MiB/s [2024-11-25T15:40:34.045Z] [2024-11-25 15:40:33.858540] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:35.364 [2024-11-25 15:40:33.973251] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:35.936 [2024-11-25 15:40:34.307973] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:35.936 [2024-11-25 15:40:34.420837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:35.936 15:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:35.936 15:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.936 15:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.936 15:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.936 15:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.936 15:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.936 15:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.936 15:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.936 15:40:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.936 15:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.936 15:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.936 15:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.936 "name": "raid_bdev1", 00:13:35.936 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:35.936 "strip_size_kb": 0, 00:13:35.936 "state": "online", 00:13:35.936 "raid_level": "raid1", 00:13:35.936 "superblock": true, 00:13:35.936 "num_base_bdevs": 2, 00:13:35.936 "num_base_bdevs_discovered": 2, 00:13:35.936 "num_base_bdevs_operational": 2, 00:13:35.936 "process": { 00:13:35.936 "type": "rebuild", 00:13:35.936 "target": "spare", 00:13:35.936 "progress": { 00:13:35.936 "blocks": 47104, 00:13:35.936 "percent": 74 00:13:35.936 } 00:13:35.936 }, 00:13:35.936 "base_bdevs_list": [ 00:13:35.936 { 00:13:35.936 "name": "spare", 00:13:35.936 "uuid": "bb0007c2-9768-565a-8dd5-4904f4deb68c", 00:13:35.936 "is_configured": true, 00:13:35.936 "data_offset": 2048, 00:13:35.936 "data_size": 63488 00:13:35.936 }, 00:13:35.936 { 00:13:35.936 "name": "BaseBdev2", 00:13:35.936 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:35.936 "is_configured": true, 00:13:35.936 "data_offset": 2048, 00:13:35.936 "data_size": 63488 00:13:35.936 } 00:13:35.936 ] 00:13:35.936 }' 00:13:35.936 15:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.196 15:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.196 15:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.196 15:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.196 15:40:34 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:36.196 108.50 IOPS, 325.50 MiB/s [2024-11-25T15:40:34.877Z] [2024-11-25 15:40:34.843628] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:36.457 [2024-11-25 15:40:35.061723] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:37.027 [2024-11-25 15:40:35.486359] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:37.027 [2024-11-25 15:40:35.591733] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:37.027 [2024-11-25 15:40:35.594031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.287 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.287 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.287 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.287 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.287 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.288 15:40:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.288 "name": "raid_bdev1", 00:13:37.288 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:37.288 "strip_size_kb": 0, 00:13:37.288 "state": "online", 00:13:37.288 "raid_level": "raid1", 00:13:37.288 "superblock": true, 00:13:37.288 "num_base_bdevs": 2, 00:13:37.288 "num_base_bdevs_discovered": 2, 00:13:37.288 "num_base_bdevs_operational": 2, 00:13:37.288 "base_bdevs_list": [ 00:13:37.288 { 00:13:37.288 "name": "spare", 00:13:37.288 "uuid": "bb0007c2-9768-565a-8dd5-4904f4deb68c", 00:13:37.288 "is_configured": true, 00:13:37.288 "data_offset": 2048, 00:13:37.288 "data_size": 63488 00:13:37.288 }, 00:13:37.288 { 00:13:37.288 "name": "BaseBdev2", 00:13:37.288 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:37.288 "is_configured": true, 00:13:37.288 "data_offset": 2048, 00:13:37.288 "data_size": 63488 00:13:37.288 } 00:13:37.288 ] 00:13:37.288 }' 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.288 97.86 IOPS, 293.57 MiB/s [2024-11-25T15:40:35.969Z] 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.288 "name": "raid_bdev1", 00:13:37.288 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:37.288 "strip_size_kb": 0, 00:13:37.288 "state": "online", 00:13:37.288 "raid_level": "raid1", 00:13:37.288 "superblock": true, 00:13:37.288 "num_base_bdevs": 2, 00:13:37.288 "num_base_bdevs_discovered": 2, 00:13:37.288 "num_base_bdevs_operational": 2, 00:13:37.288 "base_bdevs_list": [ 00:13:37.288 { 00:13:37.288 "name": "spare", 00:13:37.288 "uuid": "bb0007c2-9768-565a-8dd5-4904f4deb68c", 00:13:37.288 "is_configured": true, 00:13:37.288 "data_offset": 2048, 00:13:37.288 "data_size": 63488 00:13:37.288 }, 00:13:37.288 { 00:13:37.288 "name": "BaseBdev2", 00:13:37.288 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:37.288 "is_configured": true, 00:13:37.288 "data_offset": 2048, 00:13:37.288 "data_size": 63488 00:13:37.288 } 00:13:37.288 ] 00:13:37.288 }' 00:13:37.288 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.549 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:13:37.549 15:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.549 "name": "raid_bdev1", 00:13:37.549 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:37.549 "strip_size_kb": 0, 00:13:37.549 "state": "online", 00:13:37.549 "raid_level": "raid1", 00:13:37.549 "superblock": true, 00:13:37.549 "num_base_bdevs": 2, 00:13:37.549 "num_base_bdevs_discovered": 2, 00:13:37.549 "num_base_bdevs_operational": 2, 00:13:37.549 "base_bdevs_list": [ 00:13:37.549 { 00:13:37.549 "name": "spare", 00:13:37.549 "uuid": "bb0007c2-9768-565a-8dd5-4904f4deb68c", 00:13:37.549 "is_configured": true, 00:13:37.549 "data_offset": 2048, 00:13:37.549 "data_size": 63488 00:13:37.549 }, 00:13:37.549 { 00:13:37.549 "name": "BaseBdev2", 00:13:37.549 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:37.549 "is_configured": true, 00:13:37.549 "data_offset": 2048, 00:13:37.549 "data_size": 63488 00:13:37.549 } 00:13:37.549 ] 00:13:37.549 }' 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.549 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.809 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:37.809 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.809 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.809 [2024-11-25 15:40:36.467323] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.809 [2024-11-25 15:40:36.467412] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:38.070 00:13:38.070 Latency(us) 00:13:38.070 [2024-11-25T15:40:36.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.070 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:38.070 raid_bdev1 : 7.74 91.38 
274.14 0.00 0.00 14425.42 314.80 113557.58 00:13:38.070 [2024-11-25T15:40:36.751Z] =================================================================================================================== 00:13:38.070 [2024-11-25T15:40:36.751Z] Total : 91.38 274.14 0.00 0.00 14425.42 314.80 113557.58 00:13:38.070 [2024-11-25 15:40:36.572571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.070 [2024-11-25 15:40:36.572661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.070 [2024-11-25 15:40:36.572762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.070 [2024-11-25 15:40:36.572823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:38.070 { 00:13:38.070 "results": [ 00:13:38.070 { 00:13:38.070 "job": "raid_bdev1", 00:13:38.070 "core_mask": "0x1", 00:13:38.070 "workload": "randrw", 00:13:38.070 "percentage": 50, 00:13:38.070 "status": "finished", 00:13:38.070 "queue_depth": 2, 00:13:38.070 "io_size": 3145728, 00:13:38.070 "runtime": 7.73681, 00:13:38.070 "iops": 91.38133158239636, 00:13:38.070 "mibps": 274.1439947471891, 00:13:38.070 "io_failed": 0, 00:13:38.070 "io_timeout": 0, 00:13:38.070 "avg_latency_us": 14425.423459725886, 00:13:38.070 "min_latency_us": 314.80174672489085, 00:13:38.070 "max_latency_us": 113557.57554585153 00:13:38.070 } 00:13:38.070 ], 00:13:38.070 "core_count": 1 00:13:38.070 } 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.070 15:40:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:38.070 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:38.330 /dev/nbd0 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.330 1+0 records in 00:13:38.330 1+0 records out 00:13:38.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290183 s, 14.1 MB/s 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in 
"${base_bdevs[@]:1}" 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:38.330 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:38.331 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:38.331 15:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:38.589 /dev/nbd1 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:38.589 15:40:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.589 1+0 records in 00:13:38.589 1+0 records out 00:13:38.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262124 s, 15.6 MB/s 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:38.589 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # 
for i in "${nbd_list[@]}" 00:13:38.849 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.108 [2024-11-25 15:40:37.763234] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:39.108 [2024-11-25 15:40:37.763797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.108 [2024-11-25 15:40:37.763935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:39.108 [2024-11-25 15:40:37.764044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.108 [2024-11-25 15:40:37.766882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.108 [2024-11-25 15:40:37.767096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:39.108 [2024-11-25 15:40:37.767308] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:39.108 [2024-11-25 15:40:37.767435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.108 [2024-11-25 15:40:37.767677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:39.108 spare 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.108 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.393 [2024-11-25 15:40:37.867641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:39.393 [2024-11-25 15:40:37.867720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:39.393 [2024-11-25 15:40:37.868169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:39.393 [2024-11-25 15:40:37.868441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 
00:13:39.393 [2024-11-25 15:40:37.868459] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:39.394 [2024-11-25 15:40:37.868696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.394 15:40:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.394 "name": "raid_bdev1", 00:13:39.394 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:39.394 "strip_size_kb": 0, 00:13:39.394 "state": "online", 00:13:39.394 "raid_level": "raid1", 00:13:39.394 "superblock": true, 00:13:39.394 "num_base_bdevs": 2, 00:13:39.394 "num_base_bdevs_discovered": 2, 00:13:39.394 "num_base_bdevs_operational": 2, 00:13:39.394 "base_bdevs_list": [ 00:13:39.394 { 00:13:39.394 "name": "spare", 00:13:39.394 "uuid": "bb0007c2-9768-565a-8dd5-4904f4deb68c", 00:13:39.394 "is_configured": true, 00:13:39.394 "data_offset": 2048, 00:13:39.394 "data_size": 63488 00:13:39.394 }, 00:13:39.394 { 00:13:39.394 "name": "BaseBdev2", 00:13:39.394 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:39.394 "is_configured": true, 00:13:39.394 "data_offset": 2048, 00:13:39.394 "data_size": 63488 00:13:39.394 } 00:13:39.394 ] 00:13:39.394 }' 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.394 15:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.653 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.653 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.653 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.653 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.653 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.653 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.653 15:40:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.653 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.653 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.653 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.912 "name": "raid_bdev1", 00:13:39.912 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:39.912 "strip_size_kb": 0, 00:13:39.912 "state": "online", 00:13:39.912 "raid_level": "raid1", 00:13:39.912 "superblock": true, 00:13:39.912 "num_base_bdevs": 2, 00:13:39.912 "num_base_bdevs_discovered": 2, 00:13:39.912 "num_base_bdevs_operational": 2, 00:13:39.912 "base_bdevs_list": [ 00:13:39.912 { 00:13:39.912 "name": "spare", 00:13:39.912 "uuid": "bb0007c2-9768-565a-8dd5-4904f4deb68c", 00:13:39.912 "is_configured": true, 00:13:39.912 "data_offset": 2048, 00:13:39.912 "data_size": 63488 00:13:39.912 }, 00:13:39.912 { 00:13:39.912 "name": "BaseBdev2", 00:13:39.912 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:39.912 "is_configured": true, 00:13:39.912 "data_offset": 2048, 00:13:39.912 "data_size": 63488 00:13:39.912 } 00:13:39.912 ] 00:13:39.912 }' 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:39.912 15:40:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.912 [2024-11-25 15:40:38.514301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.912 "name": "raid_bdev1", 00:13:39.912 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:39.912 "strip_size_kb": 0, 00:13:39.912 "state": "online", 00:13:39.912 "raid_level": "raid1", 00:13:39.912 "superblock": true, 00:13:39.912 "num_base_bdevs": 2, 00:13:39.912 "num_base_bdevs_discovered": 1, 00:13:39.912 "num_base_bdevs_operational": 1, 00:13:39.912 "base_bdevs_list": [ 00:13:39.912 { 00:13:39.912 "name": null, 00:13:39.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.912 "is_configured": false, 00:13:39.912 "data_offset": 0, 00:13:39.912 "data_size": 63488 00:13:39.912 }, 00:13:39.912 { 00:13:39.912 "name": "BaseBdev2", 00:13:39.912 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:39.912 "is_configured": true, 00:13:39.912 "data_offset": 2048, 00:13:39.912 "data_size": 63488 00:13:39.912 } 00:13:39.912 ] 00:13:39.912 }' 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.912 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.481 15:40:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:40.481 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.481 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.481 [2024-11-25 15:40:38.969607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:40.481 [2024-11-25 15:40:38.969818] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:40.481 [2024-11-25 15:40:38.969835] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:40.481 [2024-11-25 15:40:38.970345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:40.481 [2024-11-25 15:40:38.986081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:40.481 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.481 15:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:40.481 [2024-11-25 15:40:38.987925] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:41.420 15:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.420 15:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.420 15:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.420 15:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.420 15:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.420 15:40:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.420 15:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.420 15:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.420 15:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.420 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.420 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.420 "name": "raid_bdev1", 00:13:41.420 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:41.420 "strip_size_kb": 0, 00:13:41.420 "state": "online", 00:13:41.421 "raid_level": "raid1", 00:13:41.421 "superblock": true, 00:13:41.421 "num_base_bdevs": 2, 00:13:41.421 "num_base_bdevs_discovered": 2, 00:13:41.421 "num_base_bdevs_operational": 2, 00:13:41.421 "process": { 00:13:41.421 "type": "rebuild", 00:13:41.421 "target": "spare", 00:13:41.421 "progress": { 00:13:41.421 "blocks": 20480, 00:13:41.421 "percent": 32 00:13:41.421 } 00:13:41.421 }, 00:13:41.421 "base_bdevs_list": [ 00:13:41.421 { 00:13:41.421 "name": "spare", 00:13:41.421 "uuid": "bb0007c2-9768-565a-8dd5-4904f4deb68c", 00:13:41.421 "is_configured": true, 00:13:41.421 "data_offset": 2048, 00:13:41.421 "data_size": 63488 00:13:41.421 }, 00:13:41.421 { 00:13:41.421 "name": "BaseBdev2", 00:13:41.421 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:41.421 "is_configured": true, 00:13:41.421 "data_offset": 2048, 00:13:41.421 "data_size": 63488 00:13:41.421 } 00:13:41.421 ] 00:13:41.421 }' 00:13:41.421 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.421 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.421 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.681 [2024-11-25 15:40:40.123528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.681 [2024-11-25 15:40:40.193789] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:41.681 [2024-11-25 15:40:40.194190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.681 [2024-11-25 15:40:40.194211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.681 [2024-11-25 15:40:40.194225] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.681 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.681 "name": "raid_bdev1", 00:13:41.682 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:41.682 "strip_size_kb": 0, 00:13:41.682 "state": "online", 00:13:41.682 "raid_level": "raid1", 00:13:41.682 "superblock": true, 00:13:41.682 "num_base_bdevs": 2, 00:13:41.682 "num_base_bdevs_discovered": 1, 00:13:41.682 "num_base_bdevs_operational": 1, 00:13:41.682 "base_bdevs_list": [ 00:13:41.682 { 00:13:41.682 "name": null, 00:13:41.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.682 "is_configured": false, 00:13:41.682 "data_offset": 0, 00:13:41.682 "data_size": 63488 00:13:41.682 }, 00:13:41.682 { 00:13:41.682 "name": "BaseBdev2", 00:13:41.682 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:41.682 "is_configured": true, 00:13:41.682 "data_offset": 2048, 00:13:41.682 "data_size": 63488 00:13:41.682 } 00:13:41.682 ] 00:13:41.682 }' 00:13:41.682 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:41.682 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.254 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:42.254 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.254 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.254 [2024-11-25 15:40:40.691073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:42.254 [2024-11-25 15:40:40.691369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.254 [2024-11-25 15:40:40.691455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:42.254 [2024-11-25 15:40:40.691536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.254 [2024-11-25 15:40:40.692168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.254 [2024-11-25 15:40:40.692271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:42.254 [2024-11-25 15:40:40.692461] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:42.254 [2024-11-25 15:40:40.692483] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:42.254 [2024-11-25 15:40:40.692493] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:42.254 [2024-11-25 15:40:40.692571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.254 [2024-11-25 15:40:40.708926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:42.254 spare 00:13:42.254 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.254 15:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:42.254 [2024-11-25 15:40:40.710812] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.196 "name": "raid_bdev1", 00:13:43.196 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:43.196 "strip_size_kb": 0, 00:13:43.196 
"state": "online", 00:13:43.196 "raid_level": "raid1", 00:13:43.196 "superblock": true, 00:13:43.196 "num_base_bdevs": 2, 00:13:43.196 "num_base_bdevs_discovered": 2, 00:13:43.196 "num_base_bdevs_operational": 2, 00:13:43.196 "process": { 00:13:43.196 "type": "rebuild", 00:13:43.196 "target": "spare", 00:13:43.196 "progress": { 00:13:43.196 "blocks": 20480, 00:13:43.196 "percent": 32 00:13:43.196 } 00:13:43.196 }, 00:13:43.196 "base_bdevs_list": [ 00:13:43.196 { 00:13:43.196 "name": "spare", 00:13:43.196 "uuid": "bb0007c2-9768-565a-8dd5-4904f4deb68c", 00:13:43.196 "is_configured": true, 00:13:43.196 "data_offset": 2048, 00:13:43.196 "data_size": 63488 00:13:43.196 }, 00:13:43.196 { 00:13:43.196 "name": "BaseBdev2", 00:13:43.196 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:43.196 "is_configured": true, 00:13:43.196 "data_offset": 2048, 00:13:43.196 "data_size": 63488 00:13:43.196 } 00:13:43.196 ] 00:13:43.196 }' 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.196 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.196 [2024-11-25 15:40:41.863113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.457 [2024-11-25 15:40:41.915950] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:43.457 [2024-11-25 15:40:41.916515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.457 [2024-11-25 15:40:41.916583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.457 [2024-11-25 15:40:41.916616] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:43.457 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.457 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:43.457 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.457 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.457 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.457 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.457 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:43.457 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.457 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.457 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.457 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.457 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.457 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.457 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.457 15:40:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.457 15:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.457 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.457 "name": "raid_bdev1", 00:13:43.457 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:43.457 "strip_size_kb": 0, 00:13:43.457 "state": "online", 00:13:43.457 "raid_level": "raid1", 00:13:43.457 "superblock": true, 00:13:43.457 "num_base_bdevs": 2, 00:13:43.457 "num_base_bdevs_discovered": 1, 00:13:43.457 "num_base_bdevs_operational": 1, 00:13:43.457 "base_bdevs_list": [ 00:13:43.457 { 00:13:43.457 "name": null, 00:13:43.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.457 "is_configured": false, 00:13:43.457 "data_offset": 0, 00:13:43.457 "data_size": 63488 00:13:43.457 }, 00:13:43.457 { 00:13:43.457 "name": "BaseBdev2", 00:13:43.457 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:43.457 "is_configured": true, 00:13:43.457 "data_offset": 2048, 00:13:43.457 "data_size": 63488 00:13:43.457 } 00:13:43.457 ] 00:13:43.457 }' 00:13:43.457 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.457 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.029 "name": "raid_bdev1", 00:13:44.029 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:44.029 "strip_size_kb": 0, 00:13:44.029 "state": "online", 00:13:44.029 "raid_level": "raid1", 00:13:44.029 "superblock": true, 00:13:44.029 "num_base_bdevs": 2, 00:13:44.029 "num_base_bdevs_discovered": 1, 00:13:44.029 "num_base_bdevs_operational": 1, 00:13:44.029 "base_bdevs_list": [ 00:13:44.029 { 00:13:44.029 "name": null, 00:13:44.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.029 "is_configured": false, 00:13:44.029 "data_offset": 0, 00:13:44.029 "data_size": 63488 00:13:44.029 }, 00:13:44.029 { 00:13:44.029 "name": "BaseBdev2", 00:13:44.029 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:44.029 "is_configured": true, 00:13:44.029 "data_offset": 2048, 00:13:44.029 "data_size": 63488 00:13:44.029 } 00:13:44.029 ] 00:13:44.029 }' 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.029 [2024-11-25 15:40:42.551989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:44.029 [2024-11-25 15:40:42.552237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.029 [2024-11-25 15:40:42.552328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:44.029 [2024-11-25 15:40:42.552401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.029 [2024-11-25 15:40:42.552879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.029 [2024-11-25 15:40:42.553015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:44.029 [2024-11-25 15:40:42.553197] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:44.029 [2024-11-25 15:40:42.553243] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:44.029 [2024-11-25 15:40:42.553285] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:44.029 [2024-11-25 15:40:42.553337] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:44.029 BaseBdev1 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.029 15:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.970 "name": "raid_bdev1", 00:13:44.970 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:44.970 "strip_size_kb": 0, 00:13:44.970 "state": "online", 00:13:44.970 "raid_level": "raid1", 00:13:44.970 "superblock": true, 00:13:44.970 "num_base_bdevs": 2, 00:13:44.970 "num_base_bdevs_discovered": 1, 00:13:44.970 "num_base_bdevs_operational": 1, 00:13:44.970 "base_bdevs_list": [ 00:13:44.970 { 00:13:44.970 "name": null, 00:13:44.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.970 "is_configured": false, 00:13:44.970 "data_offset": 0, 00:13:44.970 "data_size": 63488 00:13:44.970 }, 00:13:44.970 { 00:13:44.970 "name": "BaseBdev2", 00:13:44.970 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:44.970 "is_configured": true, 00:13:44.970 "data_offset": 2048, 00:13:44.970 "data_size": 63488 00:13:44.970 } 00:13:44.970 ] 00:13:44.970 }' 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.970 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.541 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:45.541 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.541 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:45.541 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:45.541 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.541 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.541 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.541 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.541 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.541 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.541 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.541 "name": "raid_bdev1", 00:13:45.541 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:45.541 "strip_size_kb": 0, 00:13:45.541 "state": "online", 00:13:45.541 "raid_level": "raid1", 00:13:45.541 "superblock": true, 00:13:45.541 "num_base_bdevs": 2, 00:13:45.541 "num_base_bdevs_discovered": 1, 00:13:45.541 "num_base_bdevs_operational": 1, 00:13:45.541 "base_bdevs_list": [ 00:13:45.541 { 00:13:45.541 "name": null, 00:13:45.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.541 "is_configured": false, 00:13:45.541 "data_offset": 0, 00:13:45.541 "data_size": 63488 00:13:45.541 }, 00:13:45.541 { 00:13:45.541 "name": "BaseBdev2", 00:13:45.541 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:45.541 "is_configured": true, 00:13:45.541 "data_offset": 2048, 00:13:45.541 "data_size": 63488 00:13:45.541 } 00:13:45.541 ] 00:13:45.541 }' 00:13:45.541 15:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.541 [2024-11-25 15:40:44.101577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.541 [2024-11-25 15:40:44.101732] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:45.541 [2024-11-25 15:40:44.101747] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:45.541 request: 00:13:45.541 { 00:13:45.541 "base_bdev": "BaseBdev1", 00:13:45.541 "raid_bdev": "raid_bdev1", 00:13:45.541 "method": "bdev_raid_add_base_bdev", 00:13:45.541 "req_id": 1 00:13:45.541 } 00:13:45.541 Got JSON-RPC error response 00:13:45.541 response: 00:13:45.541 { 00:13:45.541 "code": -22, 00:13:45.541 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:45.541 } 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:45.541 15:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:46.482 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:46.482 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.482 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.482 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.482 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.482 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:46.482 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.482 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.482 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.482 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.482 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.482 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.482 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:46.482 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.482 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.740 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.740 "name": "raid_bdev1", 00:13:46.740 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:46.740 "strip_size_kb": 0, 00:13:46.740 "state": "online", 00:13:46.740 "raid_level": "raid1", 00:13:46.740 "superblock": true, 00:13:46.740 "num_base_bdevs": 2, 00:13:46.740 "num_base_bdevs_discovered": 1, 00:13:46.740 "num_base_bdevs_operational": 1, 00:13:46.740 "base_bdevs_list": [ 00:13:46.740 { 00:13:46.740 "name": null, 00:13:46.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.741 "is_configured": false, 00:13:46.741 "data_offset": 0, 00:13:46.741 "data_size": 63488 00:13:46.741 }, 00:13:46.741 { 00:13:46.741 "name": "BaseBdev2", 00:13:46.741 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:46.741 "is_configured": true, 00:13:46.741 "data_offset": 2048, 00:13:46.741 "data_size": 63488 00:13:46.741 } 00:13:46.741 ] 00:13:46.741 }' 00:13:46.741 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.741 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.000 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:47.000 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.000 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:47.000 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:47.000 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.000 15:40:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.000 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.000 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.000 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.000 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.000 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.000 "name": "raid_bdev1", 00:13:47.000 "uuid": "51be14d2-4b37-4b55-b4af-54f60f03e20d", 00:13:47.000 "strip_size_kb": 0, 00:13:47.000 "state": "online", 00:13:47.000 "raid_level": "raid1", 00:13:47.000 "superblock": true, 00:13:47.000 "num_base_bdevs": 2, 00:13:47.000 "num_base_bdevs_discovered": 1, 00:13:47.000 "num_base_bdevs_operational": 1, 00:13:47.000 "base_bdevs_list": [ 00:13:47.000 { 00:13:47.000 "name": null, 00:13:47.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.000 "is_configured": false, 00:13:47.000 "data_offset": 0, 00:13:47.000 "data_size": 63488 00:13:47.000 }, 00:13:47.000 { 00:13:47.000 "name": "BaseBdev2", 00:13:47.000 "uuid": "57dd66db-d740-5332-b769-5b69fda0b452", 00:13:47.000 "is_configured": true, 00:13:47.000 "data_offset": 2048, 00:13:47.000 "data_size": 63488 00:13:47.000 } 00:13:47.000 ] 00:13:47.000 }' 00:13:47.000 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.269 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.269 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.269 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:47.269 15:40:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76525 00:13:47.269 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76525 ']' 00:13:47.269 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76525 00:13:47.269 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:47.269 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.269 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76525 00:13:47.269 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:47.269 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:47.269 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76525' 00:13:47.269 killing process with pid 76525 00:13:47.269 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76525 00:13:47.269 Received shutdown signal, test time was about 16.987256 seconds 00:13:47.269 00:13:47.269 Latency(us) 00:13:47.269 [2024-11-25T15:40:45.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.269 [2024-11-25T15:40:45.950Z] =================================================================================================================== 00:13:47.269 [2024-11-25T15:40:45.950Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:47.269 [2024-11-25 15:40:45.783750] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:47.269 [2024-11-25 15:40:45.783880] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.269 15:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76525 00:13:47.269 [2024-11-25 15:40:45.783937] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.269 [2024-11-25 15:40:45.783949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:47.544 [2024-11-25 15:40:46.006364] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:48.485 15:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:48.485 00:13:48.485 real 0m20.059s 00:13:48.485 user 0m26.171s 00:13:48.485 sys 0m2.121s 00:13:48.485 ************************************ 00:13:48.485 END TEST raid_rebuild_test_sb_io 00:13:48.485 ************************************ 00:13:48.485 15:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.485 15:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.745 15:40:47 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:48.745 15:40:47 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:48.745 15:40:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:48.745 15:40:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.745 15:40:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:48.745 ************************************ 00:13:48.745 START TEST raid_rebuild_test 00:13:48.745 ************************************ 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:48.745 15:40:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77208 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77208 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77208 ']' 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.745 15:40:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.745 [2024-11-25 15:40:47.285715] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:13:48.745 [2024-11-25 15:40:47.286361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77208 ] 00:13:48.745 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:48.745 Zero copy mechanism will not be used. 00:13:49.004 [2024-11-25 15:40:47.459220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.004 [2024-11-25 15:40:47.571992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.264 [2024-11-25 15:40:47.766355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.264 [2024-11-25 15:40:47.766482] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.523 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.523 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:49.524 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:49.524 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:49.524 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.524 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.524 BaseBdev1_malloc 00:13:49.524 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.524 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:49.524 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.524 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:49.524 [2024-11-25 15:40:48.159199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:49.524 [2024-11-25 15:40:48.159453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.524 [2024-11-25 15:40:48.159488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:49.524 [2024-11-25 15:40:48.159500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.524 [2024-11-25 15:40:48.161545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.524 [2024-11-25 15:40:48.161585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:49.524 BaseBdev1 00:13:49.524 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.524 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:49.524 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:49.524 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.524 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.784 BaseBdev2_malloc 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.784 [2024-11-25 15:40:48.212074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:49.784 [2024-11-25 15:40:48.212131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:13:49.784 [2024-11-25 15:40:48.212148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:49.784 [2024-11-25 15:40:48.212159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.784 [2024-11-25 15:40:48.214135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.784 [2024-11-25 15:40:48.214173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:49.784 BaseBdev2 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.784 BaseBdev3_malloc 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.784 [2024-11-25 15:40:48.276824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:49.784 [2024-11-25 15:40:48.276919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.784 [2024-11-25 15:40:48.276959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:49.784 [2024-11-25 15:40:48.276970] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.784 [2024-11-25 15:40:48.279021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.784 [2024-11-25 15:40:48.279067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:49.784 BaseBdev3 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.784 BaseBdev4_malloc 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.784 [2024-11-25 15:40:48.332189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:49.784 [2024-11-25 15:40:48.332240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.784 [2024-11-25 15:40:48.332257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:49.784 [2024-11-25 15:40:48.332267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.784 [2024-11-25 15:40:48.334362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.784 [2024-11-25 15:40:48.334452] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:49.784 BaseBdev4 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.784 spare_malloc 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.784 spare_delay 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.784 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.785 [2024-11-25 15:40:48.397904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:49.785 [2024-11-25 15:40:48.397958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.785 [2024-11-25 15:40:48.397993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:49.785 [2024-11-25 15:40:48.398003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.785 [2024-11-25 
15:40:48.400075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.785 [2024-11-25 15:40:48.400110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:49.785 spare 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.785 [2024-11-25 15:40:48.409927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.785 [2024-11-25 15:40:48.411717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:49.785 [2024-11-25 15:40:48.411782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:49.785 [2024-11-25 15:40:48.411831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:49.785 [2024-11-25 15:40:48.411914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:49.785 [2024-11-25 15:40:48.411926] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:49.785 [2024-11-25 15:40:48.412169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:49.785 [2024-11-25 15:40:48.412323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:49.785 [2024-11-25 15:40:48.412335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:49.785 [2024-11-25 15:40:48.412475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.785 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.045 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.045 "name": "raid_bdev1", 00:13:50.045 "uuid": "cab51534-b534-495c-be12-987f7d8c6b60", 00:13:50.045 "strip_size_kb": 0, 00:13:50.045 "state": "online", 00:13:50.045 "raid_level": 
"raid1", 00:13:50.045 "superblock": false, 00:13:50.045 "num_base_bdevs": 4, 00:13:50.045 "num_base_bdevs_discovered": 4, 00:13:50.045 "num_base_bdevs_operational": 4, 00:13:50.045 "base_bdevs_list": [ 00:13:50.045 { 00:13:50.045 "name": "BaseBdev1", 00:13:50.045 "uuid": "a65452eb-85fa-5452-bb18-041b851c606a", 00:13:50.045 "is_configured": true, 00:13:50.045 "data_offset": 0, 00:13:50.045 "data_size": 65536 00:13:50.045 }, 00:13:50.045 { 00:13:50.045 "name": "BaseBdev2", 00:13:50.045 "uuid": "3ad33d9e-f624-5ea7-916c-56a683215ea6", 00:13:50.045 "is_configured": true, 00:13:50.045 "data_offset": 0, 00:13:50.045 "data_size": 65536 00:13:50.045 }, 00:13:50.045 { 00:13:50.045 "name": "BaseBdev3", 00:13:50.045 "uuid": "3ae2ad04-7550-5651-837d-b93f2f3f54b1", 00:13:50.045 "is_configured": true, 00:13:50.045 "data_offset": 0, 00:13:50.045 "data_size": 65536 00:13:50.045 }, 00:13:50.045 { 00:13:50.045 "name": "BaseBdev4", 00:13:50.045 "uuid": "bc5bca78-77c6-5535-9398-666312ff3daa", 00:13:50.045 "is_configured": true, 00:13:50.045 "data_offset": 0, 00:13:50.045 "data_size": 65536 00:13:50.045 } 00:13:50.045 ] 00:13:50.045 }' 00:13:50.045 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.045 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:50.305 [2024-11-25 15:40:48.881472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.305 15:40:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.305 15:40:48 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:50.564 [2024-11-25 15:40:49.156697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:50.564 /dev/nbd0 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.564 1+0 records in 00:13:50.564 1+0 records out 00:13:50.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354697 s, 11.5 MB/s 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:50.564 15:40:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:55.849 65536+0 records in 00:13:55.849 65536+0 records out 00:13:55.849 33554432 bytes (34 MB, 32 MiB) copied, 5.15415 s, 6.5 MB/s 00:13:55.849 15:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:55.849 15:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.849 15:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:55.849 15:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:55.849 15:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:55.849 15:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.849 15:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:56.137 [2024-11-25 15:40:54.574703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:56.137 
15:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.137 [2024-11-25 15:40:54.612922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.137 15:40:54 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.137 "name": "raid_bdev1", 00:13:56.137 "uuid": "cab51534-b534-495c-be12-987f7d8c6b60", 00:13:56.137 "strip_size_kb": 0, 00:13:56.137 "state": "online", 00:13:56.137 "raid_level": "raid1", 00:13:56.137 "superblock": false, 00:13:56.137 "num_base_bdevs": 4, 00:13:56.137 "num_base_bdevs_discovered": 3, 00:13:56.137 "num_base_bdevs_operational": 3, 00:13:56.137 "base_bdevs_list": [ 00:13:56.137 { 00:13:56.137 "name": null, 00:13:56.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.137 "is_configured": false, 00:13:56.137 "data_offset": 0, 00:13:56.137 "data_size": 65536 00:13:56.137 }, 00:13:56.137 { 00:13:56.137 "name": "BaseBdev2", 00:13:56.137 "uuid": "3ad33d9e-f624-5ea7-916c-56a683215ea6", 00:13:56.137 "is_configured": true, 00:13:56.137 "data_offset": 0, 00:13:56.137 "data_size": 65536 00:13:56.137 }, 00:13:56.137 { 00:13:56.137 "name": "BaseBdev3", 00:13:56.137 "uuid": "3ae2ad04-7550-5651-837d-b93f2f3f54b1", 00:13:56.137 "is_configured": true, 00:13:56.137 "data_offset": 0, 00:13:56.137 "data_size": 65536 00:13:56.137 }, 00:13:56.137 { 00:13:56.137 "name": "BaseBdev4", 00:13:56.137 "uuid": "bc5bca78-77c6-5535-9398-666312ff3daa", 00:13:56.137 
"is_configured": true, 00:13:56.137 "data_offset": 0, 00:13:56.137 "data_size": 65536 00:13:56.137 } 00:13:56.137 ] 00:13:56.137 }' 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.137 15:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.706 15:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:56.706 15:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.706 15:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.706 [2024-11-25 15:40:55.088108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:56.706 [2024-11-25 15:40:55.104341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:56.706 15:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.706 15:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:56.706 [2024-11-25 15:40:55.106165] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.646 "name": "raid_bdev1", 00:13:57.646 "uuid": "cab51534-b534-495c-be12-987f7d8c6b60", 00:13:57.646 "strip_size_kb": 0, 00:13:57.646 "state": "online", 00:13:57.646 "raid_level": "raid1", 00:13:57.646 "superblock": false, 00:13:57.646 "num_base_bdevs": 4, 00:13:57.646 "num_base_bdevs_discovered": 4, 00:13:57.646 "num_base_bdevs_operational": 4, 00:13:57.646 "process": { 00:13:57.646 "type": "rebuild", 00:13:57.646 "target": "spare", 00:13:57.646 "progress": { 00:13:57.646 "blocks": 20480, 00:13:57.646 "percent": 31 00:13:57.646 } 00:13:57.646 }, 00:13:57.646 "base_bdevs_list": [ 00:13:57.646 { 00:13:57.646 "name": "spare", 00:13:57.646 "uuid": "44476aee-3e10-5826-a9c1-399eb1ece04e", 00:13:57.646 "is_configured": true, 00:13:57.646 "data_offset": 0, 00:13:57.646 "data_size": 65536 00:13:57.646 }, 00:13:57.646 { 00:13:57.646 "name": "BaseBdev2", 00:13:57.646 "uuid": "3ad33d9e-f624-5ea7-916c-56a683215ea6", 00:13:57.646 "is_configured": true, 00:13:57.646 "data_offset": 0, 00:13:57.646 "data_size": 65536 00:13:57.646 }, 00:13:57.646 { 00:13:57.646 "name": "BaseBdev3", 00:13:57.646 "uuid": "3ae2ad04-7550-5651-837d-b93f2f3f54b1", 00:13:57.646 "is_configured": true, 00:13:57.646 "data_offset": 0, 00:13:57.646 "data_size": 65536 00:13:57.646 }, 00:13:57.646 { 00:13:57.646 "name": "BaseBdev4", 00:13:57.646 "uuid": "bc5bca78-77c6-5535-9398-666312ff3daa", 00:13:57.646 "is_configured": true, 00:13:57.646 "data_offset": 0, 00:13:57.646 "data_size": 65536 00:13:57.646 } 00:13:57.646 ] 00:13:57.646 }' 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.646 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.646 [2024-11-25 15:40:56.249291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:57.646 [2024-11-25 15:40:56.310681] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:57.646 [2024-11-25 15:40:56.310756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.646 [2024-11-25 15:40:56.310771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:57.646 [2024-11-25 15:40:56.310781] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.906 "name": "raid_bdev1", 00:13:57.906 "uuid": "cab51534-b534-495c-be12-987f7d8c6b60", 00:13:57.906 "strip_size_kb": 0, 00:13:57.906 "state": "online", 00:13:57.906 "raid_level": "raid1", 00:13:57.906 "superblock": false, 00:13:57.906 "num_base_bdevs": 4, 00:13:57.906 "num_base_bdevs_discovered": 3, 00:13:57.906 "num_base_bdevs_operational": 3, 00:13:57.906 "base_bdevs_list": [ 00:13:57.906 { 00:13:57.906 "name": null, 00:13:57.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.906 "is_configured": false, 00:13:57.906 "data_offset": 0, 00:13:57.906 "data_size": 65536 00:13:57.906 }, 00:13:57.906 { 00:13:57.906 "name": "BaseBdev2", 00:13:57.906 "uuid": "3ad33d9e-f624-5ea7-916c-56a683215ea6", 00:13:57.906 "is_configured": true, 00:13:57.906 "data_offset": 0, 00:13:57.906 "data_size": 65536 00:13:57.906 }, 00:13:57.906 { 
00:13:57.906 "name": "BaseBdev3", 00:13:57.906 "uuid": "3ae2ad04-7550-5651-837d-b93f2f3f54b1", 00:13:57.906 "is_configured": true, 00:13:57.906 "data_offset": 0, 00:13:57.906 "data_size": 65536 00:13:57.906 }, 00:13:57.906 { 00:13:57.906 "name": "BaseBdev4", 00:13:57.906 "uuid": "bc5bca78-77c6-5535-9398-666312ff3daa", 00:13:57.906 "is_configured": true, 00:13:57.906 "data_offset": 0, 00:13:57.906 "data_size": 65536 00:13:57.906 } 00:13:57.906 ] 00:13:57.906 }' 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.906 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.166 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.166 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.166 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.166 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.166 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.166 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.166 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.166 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.166 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.166 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.166 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.166 "name": "raid_bdev1", 00:13:58.166 "uuid": "cab51534-b534-495c-be12-987f7d8c6b60", 00:13:58.166 "strip_size_kb": 0, 00:13:58.166 "state": "online", 
00:13:58.166 "raid_level": "raid1", 00:13:58.166 "superblock": false, 00:13:58.166 "num_base_bdevs": 4, 00:13:58.166 "num_base_bdevs_discovered": 3, 00:13:58.166 "num_base_bdevs_operational": 3, 00:13:58.166 "base_bdevs_list": [ 00:13:58.166 { 00:13:58.166 "name": null, 00:13:58.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.166 "is_configured": false, 00:13:58.166 "data_offset": 0, 00:13:58.166 "data_size": 65536 00:13:58.166 }, 00:13:58.166 { 00:13:58.166 "name": "BaseBdev2", 00:13:58.166 "uuid": "3ad33d9e-f624-5ea7-916c-56a683215ea6", 00:13:58.166 "is_configured": true, 00:13:58.166 "data_offset": 0, 00:13:58.166 "data_size": 65536 00:13:58.166 }, 00:13:58.166 { 00:13:58.166 "name": "BaseBdev3", 00:13:58.166 "uuid": "3ae2ad04-7550-5651-837d-b93f2f3f54b1", 00:13:58.166 "is_configured": true, 00:13:58.166 "data_offset": 0, 00:13:58.166 "data_size": 65536 00:13:58.166 }, 00:13:58.166 { 00:13:58.166 "name": "BaseBdev4", 00:13:58.166 "uuid": "bc5bca78-77c6-5535-9398-666312ff3daa", 00:13:58.166 "is_configured": true, 00:13:58.166 "data_offset": 0, 00:13:58.166 "data_size": 65536 00:13:58.166 } 00:13:58.166 ] 00:13:58.167 }' 00:13:58.167 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.167 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.167 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.426 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.426 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:58.426 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.426 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.426 [2024-11-25 15:40:56.874853] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:58.426 [2024-11-25 15:40:56.889192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:58.426 15:40:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.426 15:40:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:58.426 [2024-11-25 15:40:56.891035] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:59.365 15:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.365 15:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.365 15:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.365 15:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.365 15:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.365 15:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.365 15:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.365 15:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.365 15:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.365 15:40:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.365 15:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.365 "name": "raid_bdev1", 00:13:59.365 "uuid": "cab51534-b534-495c-be12-987f7d8c6b60", 00:13:59.365 "strip_size_kb": 0, 00:13:59.365 "state": "online", 00:13:59.365 "raid_level": "raid1", 00:13:59.365 "superblock": false, 00:13:59.365 "num_base_bdevs": 4, 00:13:59.365 
"num_base_bdevs_discovered": 4, 00:13:59.365 "num_base_bdevs_operational": 4, 00:13:59.365 "process": { 00:13:59.365 "type": "rebuild", 00:13:59.365 "target": "spare", 00:13:59.365 "progress": { 00:13:59.365 "blocks": 20480, 00:13:59.365 "percent": 31 00:13:59.365 } 00:13:59.365 }, 00:13:59.365 "base_bdevs_list": [ 00:13:59.365 { 00:13:59.365 "name": "spare", 00:13:59.365 "uuid": "44476aee-3e10-5826-a9c1-399eb1ece04e", 00:13:59.365 "is_configured": true, 00:13:59.365 "data_offset": 0, 00:13:59.365 "data_size": 65536 00:13:59.365 }, 00:13:59.365 { 00:13:59.365 "name": "BaseBdev2", 00:13:59.365 "uuid": "3ad33d9e-f624-5ea7-916c-56a683215ea6", 00:13:59.365 "is_configured": true, 00:13:59.365 "data_offset": 0, 00:13:59.365 "data_size": 65536 00:13:59.365 }, 00:13:59.365 { 00:13:59.365 "name": "BaseBdev3", 00:13:59.365 "uuid": "3ae2ad04-7550-5651-837d-b93f2f3f54b1", 00:13:59.365 "is_configured": true, 00:13:59.365 "data_offset": 0, 00:13:59.365 "data_size": 65536 00:13:59.365 }, 00:13:59.365 { 00:13:59.365 "name": "BaseBdev4", 00:13:59.365 "uuid": "bc5bca78-77c6-5535-9398-666312ff3daa", 00:13:59.365 "is_configured": true, 00:13:59.365 "data_offset": 0, 00:13:59.365 "data_size": 65536 00:13:59.365 } 00:13:59.365 ] 00:13:59.365 }' 00:13:59.365 15:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.365 15:40:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.365 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.625 [2024-11-25 15:40:58.054885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:59.625 [2024-11-25 15:40:58.095691] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.625 15:40:58 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.625 "name": "raid_bdev1", 00:13:59.625 "uuid": "cab51534-b534-495c-be12-987f7d8c6b60", 00:13:59.625 "strip_size_kb": 0, 00:13:59.625 "state": "online", 00:13:59.625 "raid_level": "raid1", 00:13:59.625 "superblock": false, 00:13:59.625 "num_base_bdevs": 4, 00:13:59.625 "num_base_bdevs_discovered": 3, 00:13:59.625 "num_base_bdevs_operational": 3, 00:13:59.625 "process": { 00:13:59.625 "type": "rebuild", 00:13:59.625 "target": "spare", 00:13:59.625 "progress": { 00:13:59.625 "blocks": 24576, 00:13:59.625 "percent": 37 00:13:59.625 } 00:13:59.625 }, 00:13:59.625 "base_bdevs_list": [ 00:13:59.625 { 00:13:59.625 "name": "spare", 00:13:59.625 "uuid": "44476aee-3e10-5826-a9c1-399eb1ece04e", 00:13:59.625 "is_configured": true, 00:13:59.625 "data_offset": 0, 00:13:59.625 "data_size": 65536 00:13:59.625 }, 00:13:59.625 { 00:13:59.625 "name": null, 00:13:59.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.625 "is_configured": false, 00:13:59.625 "data_offset": 0, 00:13:59.625 "data_size": 65536 00:13:59.625 }, 00:13:59.625 { 00:13:59.625 "name": "BaseBdev3", 00:13:59.625 "uuid": "3ae2ad04-7550-5651-837d-b93f2f3f54b1", 00:13:59.625 "is_configured": true, 00:13:59.625 "data_offset": 0, 00:13:59.625 "data_size": 65536 00:13:59.625 }, 00:13:59.625 { 00:13:59.625 "name": "BaseBdev4", 00:13:59.625 "uuid": "bc5bca78-77c6-5535-9398-666312ff3daa", 00:13:59.625 "is_configured": true, 00:13:59.625 "data_offset": 0, 00:13:59.625 "data_size": 65536 00:13:59.625 } 00:13:59.625 ] 00:13:59.625 }' 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=431 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.625 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.625 "name": "raid_bdev1", 00:13:59.625 "uuid": "cab51534-b534-495c-be12-987f7d8c6b60", 00:13:59.625 "strip_size_kb": 0, 00:13:59.625 "state": "online", 00:13:59.625 "raid_level": "raid1", 00:13:59.625 "superblock": false, 00:13:59.625 "num_base_bdevs": 4, 00:13:59.625 "num_base_bdevs_discovered": 3, 00:13:59.625 "num_base_bdevs_operational": 3, 00:13:59.625 "process": { 00:13:59.625 "type": "rebuild", 00:13:59.625 "target": "spare", 00:13:59.625 "progress": { 
00:13:59.625 "blocks": 26624, 00:13:59.625 "percent": 40 00:13:59.625 } 00:13:59.625 }, 00:13:59.625 "base_bdevs_list": [ 00:13:59.625 { 00:13:59.625 "name": "spare", 00:13:59.625 "uuid": "44476aee-3e10-5826-a9c1-399eb1ece04e", 00:13:59.625 "is_configured": true, 00:13:59.625 "data_offset": 0, 00:13:59.625 "data_size": 65536 00:13:59.625 }, 00:13:59.625 { 00:13:59.625 "name": null, 00:13:59.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.625 "is_configured": false, 00:13:59.625 "data_offset": 0, 00:13:59.625 "data_size": 65536 00:13:59.625 }, 00:13:59.625 { 00:13:59.625 "name": "BaseBdev3", 00:13:59.625 "uuid": "3ae2ad04-7550-5651-837d-b93f2f3f54b1", 00:13:59.625 "is_configured": true, 00:13:59.625 "data_offset": 0, 00:13:59.625 "data_size": 65536 00:13:59.625 }, 00:13:59.625 { 00:13:59.625 "name": "BaseBdev4", 00:13:59.625 "uuid": "bc5bca78-77c6-5535-9398-666312ff3daa", 00:13:59.625 "is_configured": true, 00:13:59.626 "data_offset": 0, 00:13:59.626 "data_size": 65536 00:13:59.626 } 00:13:59.626 ] 00:13:59.626 }' 00:13:59.626 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.885 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.885 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.885 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.885 15:40:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:00.824 15:40:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.824 15:40:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.824 15:40:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.824 15:40:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:00.824 15:40:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.824 15:40:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.824 15:40:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.824 15:40:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.824 15:40:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.824 15:40:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.824 15:40:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.824 15:40:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.824 "name": "raid_bdev1", 00:14:00.824 "uuid": "cab51534-b534-495c-be12-987f7d8c6b60", 00:14:00.824 "strip_size_kb": 0, 00:14:00.824 "state": "online", 00:14:00.824 "raid_level": "raid1", 00:14:00.824 "superblock": false, 00:14:00.824 "num_base_bdevs": 4, 00:14:00.824 "num_base_bdevs_discovered": 3, 00:14:00.824 "num_base_bdevs_operational": 3, 00:14:00.824 "process": { 00:14:00.824 "type": "rebuild", 00:14:00.824 "target": "spare", 00:14:00.824 "progress": { 00:14:00.824 "blocks": 49152, 00:14:00.824 "percent": 75 00:14:00.824 } 00:14:00.824 }, 00:14:00.824 "base_bdevs_list": [ 00:14:00.824 { 00:14:00.824 "name": "spare", 00:14:00.824 "uuid": "44476aee-3e10-5826-a9c1-399eb1ece04e", 00:14:00.824 "is_configured": true, 00:14:00.824 "data_offset": 0, 00:14:00.824 "data_size": 65536 00:14:00.824 }, 00:14:00.824 { 00:14:00.824 "name": null, 00:14:00.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.824 "is_configured": false, 00:14:00.824 "data_offset": 0, 00:14:00.824 "data_size": 65536 00:14:00.824 }, 00:14:00.824 { 00:14:00.824 "name": "BaseBdev3", 00:14:00.824 "uuid": 
"3ae2ad04-7550-5651-837d-b93f2f3f54b1", 00:14:00.824 "is_configured": true, 00:14:00.824 "data_offset": 0, 00:14:00.824 "data_size": 65536 00:14:00.824 }, 00:14:00.824 { 00:14:00.824 "name": "BaseBdev4", 00:14:00.824 "uuid": "bc5bca78-77c6-5535-9398-666312ff3daa", 00:14:00.824 "is_configured": true, 00:14:00.824 "data_offset": 0, 00:14:00.824 "data_size": 65536 00:14:00.824 } 00:14:00.824 ] 00:14:00.824 }' 00:14:00.824 15:40:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.824 15:40:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.824 15:40:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.084 15:40:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.084 15:40:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:01.654 [2024-11-25 15:41:00.103583] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:01.654 [2024-11-25 15:41:00.103720] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:01.654 [2024-11-25 15:41:00.103796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.915 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:01.915 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.915 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.915 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.915 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.915 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.915 15:41:00 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.915 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.915 15:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.915 15:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.915 15:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.915 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.915 "name": "raid_bdev1", 00:14:01.915 "uuid": "cab51534-b534-495c-be12-987f7d8c6b60", 00:14:01.915 "strip_size_kb": 0, 00:14:01.915 "state": "online", 00:14:01.915 "raid_level": "raid1", 00:14:01.915 "superblock": false, 00:14:01.915 "num_base_bdevs": 4, 00:14:01.915 "num_base_bdevs_discovered": 3, 00:14:01.915 "num_base_bdevs_operational": 3, 00:14:01.915 "base_bdevs_list": [ 00:14:01.915 { 00:14:01.915 "name": "spare", 00:14:01.915 "uuid": "44476aee-3e10-5826-a9c1-399eb1ece04e", 00:14:01.915 "is_configured": true, 00:14:01.915 "data_offset": 0, 00:14:01.915 "data_size": 65536 00:14:01.915 }, 00:14:01.915 { 00:14:01.915 "name": null, 00:14:01.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.915 "is_configured": false, 00:14:01.915 "data_offset": 0, 00:14:01.915 "data_size": 65536 00:14:01.915 }, 00:14:01.915 { 00:14:01.915 "name": "BaseBdev3", 00:14:01.915 "uuid": "3ae2ad04-7550-5651-837d-b93f2f3f54b1", 00:14:01.915 "is_configured": true, 00:14:01.915 "data_offset": 0, 00:14:01.915 "data_size": 65536 00:14:01.915 }, 00:14:01.915 { 00:14:01.915 "name": "BaseBdev4", 00:14:01.915 "uuid": "bc5bca78-77c6-5535-9398-666312ff3daa", 00:14:01.915 "is_configured": true, 00:14:01.915 "data_offset": 0, 00:14:01.915 "data_size": 65536 00:14:01.915 } 00:14:01.915 ] 00:14:01.915 }' 00:14:01.915 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.175 "name": "raid_bdev1", 00:14:02.175 "uuid": "cab51534-b534-495c-be12-987f7d8c6b60", 00:14:02.175 "strip_size_kb": 0, 00:14:02.175 "state": "online", 00:14:02.175 "raid_level": "raid1", 00:14:02.175 "superblock": false, 00:14:02.175 "num_base_bdevs": 4, 00:14:02.175 "num_base_bdevs_discovered": 3, 00:14:02.175 "num_base_bdevs_operational": 3, 00:14:02.175 
"base_bdevs_list": [ 00:14:02.175 { 00:14:02.175 "name": "spare", 00:14:02.175 "uuid": "44476aee-3e10-5826-a9c1-399eb1ece04e", 00:14:02.175 "is_configured": true, 00:14:02.175 "data_offset": 0, 00:14:02.175 "data_size": 65536 00:14:02.175 }, 00:14:02.175 { 00:14:02.175 "name": null, 00:14:02.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.175 "is_configured": false, 00:14:02.175 "data_offset": 0, 00:14:02.175 "data_size": 65536 00:14:02.175 }, 00:14:02.175 { 00:14:02.175 "name": "BaseBdev3", 00:14:02.175 "uuid": "3ae2ad04-7550-5651-837d-b93f2f3f54b1", 00:14:02.175 "is_configured": true, 00:14:02.175 "data_offset": 0, 00:14:02.175 "data_size": 65536 00:14:02.175 }, 00:14:02.175 { 00:14:02.175 "name": "BaseBdev4", 00:14:02.175 "uuid": "bc5bca78-77c6-5535-9398-666312ff3daa", 00:14:02.175 "is_configured": true, 00:14:02.175 "data_offset": 0, 00:14:02.175 "data_size": 65536 00:14:02.175 } 00:14:02.175 ] 00:14:02.175 }' 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.175 "name": "raid_bdev1", 00:14:02.175 "uuid": "cab51534-b534-495c-be12-987f7d8c6b60", 00:14:02.175 "strip_size_kb": 0, 00:14:02.175 "state": "online", 00:14:02.175 "raid_level": "raid1", 00:14:02.175 "superblock": false, 00:14:02.175 "num_base_bdevs": 4, 00:14:02.175 "num_base_bdevs_discovered": 3, 00:14:02.175 "num_base_bdevs_operational": 3, 00:14:02.175 "base_bdevs_list": [ 00:14:02.175 { 00:14:02.175 "name": "spare", 00:14:02.175 "uuid": "44476aee-3e10-5826-a9c1-399eb1ece04e", 00:14:02.175 "is_configured": true, 00:14:02.175 "data_offset": 0, 00:14:02.175 "data_size": 65536 00:14:02.175 }, 00:14:02.175 { 00:14:02.175 "name": null, 00:14:02.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.175 "is_configured": false, 00:14:02.175 "data_offset": 0, 00:14:02.175 "data_size": 65536 00:14:02.175 }, 00:14:02.175 { 00:14:02.175 "name": "BaseBdev3", 00:14:02.175 "uuid": 
"3ae2ad04-7550-5651-837d-b93f2f3f54b1", 00:14:02.175 "is_configured": true, 00:14:02.175 "data_offset": 0, 00:14:02.175 "data_size": 65536 00:14:02.175 }, 00:14:02.175 { 00:14:02.175 "name": "BaseBdev4", 00:14:02.175 "uuid": "bc5bca78-77c6-5535-9398-666312ff3daa", 00:14:02.175 "is_configured": true, 00:14:02.175 "data_offset": 0, 00:14:02.175 "data_size": 65536 00:14:02.175 } 00:14:02.175 ] 00:14:02.175 }' 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.175 15:41:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.746 [2024-11-25 15:41:01.262085] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:02.746 [2024-11-25 15:41:01.262161] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.746 [2024-11-25 15:41:01.262263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.746 [2024-11-25 15:41:01.262342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.746 [2024-11-25 15:41:01.262352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:02.746 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:03.006 /dev/nbd0 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:03.006 15:41:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.006 1+0 records in 00:14:03.006 1+0 records out 00:14:03.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042376 s, 9.7 MB/s 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:03.006 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:03.267 /dev/nbd1 00:14:03.267 15:41:01 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.267 1+0 records in 00:14:03.267 1+0 records out 00:14:03.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394645 s, 10.4 MB/s 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:03.267 15:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:03.528 15:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:03.528 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.528 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:03.528 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:03.528 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:03.528 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.528 15:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:03.528 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:03.528 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:03.528 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:03.528 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.528 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.528 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:03.528 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:03.528 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.528 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.528 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77208 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77208 ']' 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77208 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77208 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:03.789 killing process with pid 77208 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77208' 00:14:03.789 
15:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77208 00:14:03.789 Received shutdown signal, test time was about 60.000000 seconds 00:14:03.789 00:14:03.789 Latency(us) 00:14:03.789 [2024-11-25T15:41:02.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.789 [2024-11-25T15:41:02.470Z] =================================================================================================================== 00:14:03.789 [2024-11-25T15:41:02.470Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:03.789 [2024-11-25 15:41:02.408277] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.789 15:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77208 00:14:04.360 [2024-11-25 15:41:02.867057] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:05.303 00:14:05.303 real 0m16.703s 00:14:05.303 user 0m18.878s 00:14:05.303 sys 0m2.893s 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.303 ************************************ 00:14:05.303 END TEST raid_rebuild_test 00:14:05.303 ************************************ 00:14:05.303 15:41:03 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:05.303 15:41:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:05.303 15:41:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.303 15:41:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:05.303 ************************************ 00:14:05.303 START TEST raid_rebuild_test_sb 00:14:05.303 ************************************ 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77655 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77655 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77655 ']' 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.303 15:41:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.563 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:14:05.563 15:41:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.563 15:41:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.563 15:41:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.563 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:05.563 Zero copy mechanism will not be used. 00:14:05.563 [2024-11-25 15:41:04.061643] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:14:05.563 [2024-11-25 15:41:04.061775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77655 ] 00:14:05.563 [2024-11-25 15:41:04.213788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.823 [2024-11-25 15:41:04.318661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.082 [2024-11-25 15:41:04.512220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.082 [2024-11-25 15:41:04.512270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.343 BaseBdev1_malloc 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.343 [2024-11-25 15:41:04.933387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:06.343 [2024-11-25 15:41:04.933468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.343 [2024-11-25 15:41:04.933492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:06.343 [2024-11-25 15:41:04.933502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.343 [2024-11-25 15:41:04.935511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.343 [2024-11-25 15:41:04.935551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:06.343 BaseBdev1 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.343 BaseBdev2_malloc 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.343 [2024-11-25 15:41:04.986166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:06.343 [2024-11-25 15:41:04.986238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.343 [2024-11-25 15:41:04.986256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:06.343 [2024-11-25 15:41:04.986268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.343 [2024-11-25 15:41:04.988288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.343 [2024-11-25 15:41:04.988337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:06.343 BaseBdev2 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.343 15:41:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.603 BaseBdev3_malloc 00:14:06.603 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.603 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:14:06.603 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.603 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.603 [2024-11-25 15:41:05.073124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:06.603 [2024-11-25 15:41:05.073176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.603 [2024-11-25 15:41:05.073213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:06.603 [2024-11-25 15:41:05.073224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.603 [2024-11-25 15:41:05.075221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.603 [2024-11-25 15:41:05.075262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:06.603 BaseBdev3 00:14:06.603 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.603 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.603 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:06.603 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.603 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.603 BaseBdev4_malloc 00:14:06.603 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.603 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:06.603 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.603 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:06.603 [2024-11-25 15:41:05.126284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:06.603 [2024-11-25 15:41:05.126334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.603 [2024-11-25 15:41:05.126366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:06.603 [2024-11-25 15:41:05.126389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.603 [2024-11-25 15:41:05.128360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.604 [2024-11-25 15:41:05.128415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:06.604 BaseBdev4 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.604 spare_malloc 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.604 spare_delay 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:06.604 15:41:05 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.604 [2024-11-25 15:41:05.193170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:06.604 [2024-11-25 15:41:05.193224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.604 [2024-11-25 15:41:05.193258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:06.604 [2024-11-25 15:41:05.193269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.604 [2024-11-25 15:41:05.195274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.604 [2024-11-25 15:41:05.195313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:06.604 spare 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.604 [2024-11-25 15:41:05.205198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:06.604 [2024-11-25 15:41:05.206894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:06.604 [2024-11-25 15:41:05.206996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:06.604 [2024-11-25 15:41:05.207059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:06.604 [2024-11-25 15:41:05.207233] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:06.604 [2024-11-25 15:41:05.207260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:06.604 [2024-11-25 15:41:05.207480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:06.604 [2024-11-25 15:41:05.207657] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:06.604 [2024-11-25 15:41:05.207671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:06.604 [2024-11-25 15:41:05.207799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.604 "name": "raid_bdev1", 00:14:06.604 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:06.604 "strip_size_kb": 0, 00:14:06.604 "state": "online", 00:14:06.604 "raid_level": "raid1", 00:14:06.604 "superblock": true, 00:14:06.604 "num_base_bdevs": 4, 00:14:06.604 "num_base_bdevs_discovered": 4, 00:14:06.604 "num_base_bdevs_operational": 4, 00:14:06.604 "base_bdevs_list": [ 00:14:06.604 { 00:14:06.604 "name": "BaseBdev1", 00:14:06.604 "uuid": "7d02a1c3-a51a-5f3b-8e87-6f88f36be3ad", 00:14:06.604 "is_configured": true, 00:14:06.604 "data_offset": 2048, 00:14:06.604 "data_size": 63488 00:14:06.604 }, 00:14:06.604 { 00:14:06.604 "name": "BaseBdev2", 00:14:06.604 "uuid": "c053513e-3d07-5970-a0f7-70d3b28b50f0", 00:14:06.604 "is_configured": true, 00:14:06.604 "data_offset": 2048, 00:14:06.604 "data_size": 63488 00:14:06.604 }, 00:14:06.604 { 00:14:06.604 "name": "BaseBdev3", 00:14:06.604 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:06.604 "is_configured": true, 00:14:06.604 "data_offset": 2048, 00:14:06.604 "data_size": 63488 00:14:06.604 }, 00:14:06.604 { 00:14:06.604 "name": "BaseBdev4", 00:14:06.604 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:06.604 "is_configured": true, 00:14:06.604 "data_offset": 2048, 00:14:06.604 "data_size": 63488 00:14:06.604 } 00:14:06.604 ] 00:14:06.604 }' 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.604 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.175 [2024-11-25 15:41:05.628827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.175 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:07.435 [2024-11-25 15:41:05.864142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:07.435 /dev/nbd0 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:07.435 
15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.435 1+0 records in 00:14:07.435 1+0 records out 00:14:07.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391296 s, 10.5 MB/s 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:07.435 15:41:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:12.712 63488+0 records in 00:14:12.712 63488+0 records out 00:14:12.712 32505856 bytes (33 MB, 31 MiB) copied, 4.76197 s, 6.8 MB/s 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:12.712 [2024-11-25 15:41:10.901735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.712 [2024-11-25 15:41:10.917813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.712 
15:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.712 "name": "raid_bdev1", 00:14:12.712 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:12.712 "strip_size_kb": 0, 00:14:12.712 "state": 
"online", 00:14:12.712 "raid_level": "raid1", 00:14:12.712 "superblock": true, 00:14:12.712 "num_base_bdevs": 4, 00:14:12.712 "num_base_bdevs_discovered": 3, 00:14:12.712 "num_base_bdevs_operational": 3, 00:14:12.712 "base_bdevs_list": [ 00:14:12.712 { 00:14:12.712 "name": null, 00:14:12.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.712 "is_configured": false, 00:14:12.712 "data_offset": 0, 00:14:12.712 "data_size": 63488 00:14:12.712 }, 00:14:12.712 { 00:14:12.712 "name": "BaseBdev2", 00:14:12.712 "uuid": "c053513e-3d07-5970-a0f7-70d3b28b50f0", 00:14:12.712 "is_configured": true, 00:14:12.712 "data_offset": 2048, 00:14:12.712 "data_size": 63488 00:14:12.712 }, 00:14:12.712 { 00:14:12.712 "name": "BaseBdev3", 00:14:12.712 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:12.712 "is_configured": true, 00:14:12.712 "data_offset": 2048, 00:14:12.712 "data_size": 63488 00:14:12.712 }, 00:14:12.712 { 00:14:12.712 "name": "BaseBdev4", 00:14:12.712 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:12.712 "is_configured": true, 00:14:12.712 "data_offset": 2048, 00:14:12.712 "data_size": 63488 00:14:12.712 } 00:14:12.712 ] 00:14:12.712 }' 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.712 15:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.712 15:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:12.712 15:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.712 15:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.712 [2024-11-25 15:41:11.321098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:12.712 [2024-11-25 15:41:11.336952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:12.712 15:41:11 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.712 15:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:12.712 [2024-11-25 15:41:11.338693] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.094 "name": "raid_bdev1", 00:14:14.094 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:14.094 "strip_size_kb": 0, 00:14:14.094 "state": "online", 00:14:14.094 "raid_level": "raid1", 00:14:14.094 "superblock": true, 00:14:14.094 "num_base_bdevs": 4, 00:14:14.094 "num_base_bdevs_discovered": 4, 00:14:14.094 "num_base_bdevs_operational": 4, 00:14:14.094 "process": { 00:14:14.094 "type": "rebuild", 00:14:14.094 "target": "spare", 00:14:14.094 "progress": { 00:14:14.094 "blocks": 20480, 
00:14:14.094 "percent": 32 00:14:14.094 } 00:14:14.094 }, 00:14:14.094 "base_bdevs_list": [ 00:14:14.094 { 00:14:14.094 "name": "spare", 00:14:14.094 "uuid": "a7fbcdbb-4f88-5f32-b51f-83f20ad20ee5", 00:14:14.094 "is_configured": true, 00:14:14.094 "data_offset": 2048, 00:14:14.094 "data_size": 63488 00:14:14.094 }, 00:14:14.094 { 00:14:14.094 "name": "BaseBdev2", 00:14:14.094 "uuid": "c053513e-3d07-5970-a0f7-70d3b28b50f0", 00:14:14.094 "is_configured": true, 00:14:14.094 "data_offset": 2048, 00:14:14.094 "data_size": 63488 00:14:14.094 }, 00:14:14.094 { 00:14:14.094 "name": "BaseBdev3", 00:14:14.094 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:14.094 "is_configured": true, 00:14:14.094 "data_offset": 2048, 00:14:14.094 "data_size": 63488 00:14:14.094 }, 00:14:14.094 { 00:14:14.094 "name": "BaseBdev4", 00:14:14.094 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:14.094 "is_configured": true, 00:14:14.094 "data_offset": 2048, 00:14:14.094 "data_size": 63488 00:14:14.094 } 00:14:14.094 ] 00:14:14.094 }' 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.094 [2024-11-25 15:41:12.503535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.094 [2024-11-25 15:41:12.543424] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:14.094 [2024-11-25 15:41:12.543499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.094 [2024-11-25 15:41:12.543516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.094 [2024-11-25 15:41:12.543525] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.094 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.094 "name": "raid_bdev1", 00:14:14.094 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:14.094 "strip_size_kb": 0, 00:14:14.094 "state": "online", 00:14:14.095 "raid_level": "raid1", 00:14:14.095 "superblock": true, 00:14:14.095 "num_base_bdevs": 4, 00:14:14.095 "num_base_bdevs_discovered": 3, 00:14:14.095 "num_base_bdevs_operational": 3, 00:14:14.095 "base_bdevs_list": [ 00:14:14.095 { 00:14:14.095 "name": null, 00:14:14.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.095 "is_configured": false, 00:14:14.095 "data_offset": 0, 00:14:14.095 "data_size": 63488 00:14:14.095 }, 00:14:14.095 { 00:14:14.095 "name": "BaseBdev2", 00:14:14.095 "uuid": "c053513e-3d07-5970-a0f7-70d3b28b50f0", 00:14:14.095 "is_configured": true, 00:14:14.095 "data_offset": 2048, 00:14:14.095 "data_size": 63488 00:14:14.095 }, 00:14:14.095 { 00:14:14.095 "name": "BaseBdev3", 00:14:14.095 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:14.095 "is_configured": true, 00:14:14.095 "data_offset": 2048, 00:14:14.095 "data_size": 63488 00:14:14.095 }, 00:14:14.095 { 00:14:14.095 "name": "BaseBdev4", 00:14:14.095 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:14.095 "is_configured": true, 00:14:14.095 "data_offset": 2048, 00:14:14.095 "data_size": 63488 00:14:14.095 } 00:14:14.095 ] 00:14:14.095 }' 00:14:14.095 15:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.095 15:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.355 15:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:14.355 
15:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.355 15:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:14.355 15:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.355 15:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.355 15:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.355 15:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.355 15:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.355 15:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.355 15:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.619 15:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.619 "name": "raid_bdev1", 00:14:14.619 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:14.619 "strip_size_kb": 0, 00:14:14.619 "state": "online", 00:14:14.619 "raid_level": "raid1", 00:14:14.619 "superblock": true, 00:14:14.619 "num_base_bdevs": 4, 00:14:14.619 "num_base_bdevs_discovered": 3, 00:14:14.619 "num_base_bdevs_operational": 3, 00:14:14.619 "base_bdevs_list": [ 00:14:14.619 { 00:14:14.619 "name": null, 00:14:14.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.619 "is_configured": false, 00:14:14.619 "data_offset": 0, 00:14:14.619 "data_size": 63488 00:14:14.619 }, 00:14:14.619 { 00:14:14.619 "name": "BaseBdev2", 00:14:14.619 "uuid": "c053513e-3d07-5970-a0f7-70d3b28b50f0", 00:14:14.619 "is_configured": true, 00:14:14.619 "data_offset": 2048, 00:14:14.619 "data_size": 63488 00:14:14.619 }, 00:14:14.619 { 00:14:14.619 "name": "BaseBdev3", 00:14:14.619 "uuid": 
"7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:14.619 "is_configured": true, 00:14:14.619 "data_offset": 2048, 00:14:14.619 "data_size": 63488 00:14:14.619 }, 00:14:14.619 { 00:14:14.619 "name": "BaseBdev4", 00:14:14.619 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:14.619 "is_configured": true, 00:14:14.619 "data_offset": 2048, 00:14:14.619 "data_size": 63488 00:14:14.619 } 00:14:14.619 ] 00:14:14.619 }' 00:14:14.619 15:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.619 15:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:14.619 15:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.619 15:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:14.619 15:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:14.619 15:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.619 15:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.619 [2024-11-25 15:41:13.179606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.620 [2024-11-25 15:41:13.194656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:14.620 15:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.620 15:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:14.620 [2024-11-25 15:41:13.196428] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:15.566 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.566 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:15.566 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.566 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.566 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.566 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.566 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.566 15:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.566 15:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.566 15:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.827 "name": "raid_bdev1", 00:14:15.827 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:15.827 "strip_size_kb": 0, 00:14:15.827 "state": "online", 00:14:15.827 "raid_level": "raid1", 00:14:15.827 "superblock": true, 00:14:15.827 "num_base_bdevs": 4, 00:14:15.827 "num_base_bdevs_discovered": 4, 00:14:15.827 "num_base_bdevs_operational": 4, 00:14:15.827 "process": { 00:14:15.827 "type": "rebuild", 00:14:15.827 "target": "spare", 00:14:15.827 "progress": { 00:14:15.827 "blocks": 20480, 00:14:15.827 "percent": 32 00:14:15.827 } 00:14:15.827 }, 00:14:15.827 "base_bdevs_list": [ 00:14:15.827 { 00:14:15.827 "name": "spare", 00:14:15.827 "uuid": "a7fbcdbb-4f88-5f32-b51f-83f20ad20ee5", 00:14:15.827 "is_configured": true, 00:14:15.827 "data_offset": 2048, 00:14:15.827 "data_size": 63488 00:14:15.827 }, 00:14:15.827 { 00:14:15.827 "name": "BaseBdev2", 00:14:15.827 "uuid": "c053513e-3d07-5970-a0f7-70d3b28b50f0", 00:14:15.827 "is_configured": true, 00:14:15.827 "data_offset": 2048, 
00:14:15.827 "data_size": 63488 00:14:15.827 }, 00:14:15.827 { 00:14:15.827 "name": "BaseBdev3", 00:14:15.827 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:15.827 "is_configured": true, 00:14:15.827 "data_offset": 2048, 00:14:15.827 "data_size": 63488 00:14:15.827 }, 00:14:15.827 { 00:14:15.827 "name": "BaseBdev4", 00:14:15.827 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:15.827 "is_configured": true, 00:14:15.827 "data_offset": 2048, 00:14:15.827 "data_size": 63488 00:14:15.827 } 00:14:15.827 ] 00:14:15.827 }' 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:15.827 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.827 [2024-11-25 15:41:14.344135] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:15.827 [2024-11-25 15:41:14.500922] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.827 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.087 "name": "raid_bdev1", 00:14:16.087 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:16.087 "strip_size_kb": 0, 00:14:16.087 "state": "online", 00:14:16.087 "raid_level": "raid1", 00:14:16.087 "superblock": true, 00:14:16.087 "num_base_bdevs": 4, 
00:14:16.087 "num_base_bdevs_discovered": 3, 00:14:16.087 "num_base_bdevs_operational": 3, 00:14:16.087 "process": { 00:14:16.087 "type": "rebuild", 00:14:16.087 "target": "spare", 00:14:16.087 "progress": { 00:14:16.087 "blocks": 24576, 00:14:16.087 "percent": 38 00:14:16.087 } 00:14:16.087 }, 00:14:16.087 "base_bdevs_list": [ 00:14:16.087 { 00:14:16.087 "name": "spare", 00:14:16.087 "uuid": "a7fbcdbb-4f88-5f32-b51f-83f20ad20ee5", 00:14:16.087 "is_configured": true, 00:14:16.087 "data_offset": 2048, 00:14:16.087 "data_size": 63488 00:14:16.087 }, 00:14:16.087 { 00:14:16.087 "name": null, 00:14:16.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.087 "is_configured": false, 00:14:16.087 "data_offset": 0, 00:14:16.087 "data_size": 63488 00:14:16.087 }, 00:14:16.087 { 00:14:16.087 "name": "BaseBdev3", 00:14:16.087 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:16.087 "is_configured": true, 00:14:16.087 "data_offset": 2048, 00:14:16.087 "data_size": 63488 00:14:16.087 }, 00:14:16.087 { 00:14:16.087 "name": "BaseBdev4", 00:14:16.087 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:16.087 "is_configured": true, 00:14:16.087 "data_offset": 2048, 00:14:16.087 "data_size": 63488 00:14:16.087 } 00:14:16.087 ] 00:14:16.087 }' 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=447 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.087 "name": "raid_bdev1", 00:14:16.087 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:16.087 "strip_size_kb": 0, 00:14:16.087 "state": "online", 00:14:16.087 "raid_level": "raid1", 00:14:16.087 "superblock": true, 00:14:16.087 "num_base_bdevs": 4, 00:14:16.087 "num_base_bdevs_discovered": 3, 00:14:16.087 "num_base_bdevs_operational": 3, 00:14:16.087 "process": { 00:14:16.087 "type": "rebuild", 00:14:16.087 "target": "spare", 00:14:16.087 "progress": { 00:14:16.087 "blocks": 26624, 00:14:16.087 "percent": 41 00:14:16.087 } 00:14:16.087 }, 00:14:16.087 "base_bdevs_list": [ 00:14:16.087 { 00:14:16.087 "name": "spare", 00:14:16.087 "uuid": "a7fbcdbb-4f88-5f32-b51f-83f20ad20ee5", 00:14:16.087 "is_configured": true, 00:14:16.087 "data_offset": 2048, 00:14:16.087 "data_size": 63488 00:14:16.087 }, 00:14:16.087 { 
00:14:16.087 "name": null, 00:14:16.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.087 "is_configured": false, 00:14:16.087 "data_offset": 0, 00:14:16.087 "data_size": 63488 00:14:16.087 }, 00:14:16.087 { 00:14:16.087 "name": "BaseBdev3", 00:14:16.087 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:16.087 "is_configured": true, 00:14:16.087 "data_offset": 2048, 00:14:16.087 "data_size": 63488 00:14:16.087 }, 00:14:16.087 { 00:14:16.087 "name": "BaseBdev4", 00:14:16.087 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:16.087 "is_configured": true, 00:14:16.087 "data_offset": 2048, 00:14:16.087 "data_size": 63488 00:14:16.087 } 00:14:16.087 ] 00:14:16.087 }' 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.087 15:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.466 "name": "raid_bdev1", 00:14:17.466 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:17.466 "strip_size_kb": 0, 00:14:17.466 "state": "online", 00:14:17.466 "raid_level": "raid1", 00:14:17.466 "superblock": true, 00:14:17.466 "num_base_bdevs": 4, 00:14:17.466 "num_base_bdevs_discovered": 3, 00:14:17.466 "num_base_bdevs_operational": 3, 00:14:17.466 "process": { 00:14:17.466 "type": "rebuild", 00:14:17.466 "target": "spare", 00:14:17.466 "progress": { 00:14:17.466 "blocks": 49152, 00:14:17.466 "percent": 77 00:14:17.466 } 00:14:17.466 }, 00:14:17.466 "base_bdevs_list": [ 00:14:17.466 { 00:14:17.466 "name": "spare", 00:14:17.466 "uuid": "a7fbcdbb-4f88-5f32-b51f-83f20ad20ee5", 00:14:17.466 "is_configured": true, 00:14:17.466 "data_offset": 2048, 00:14:17.466 "data_size": 63488 00:14:17.466 }, 00:14:17.466 { 00:14:17.466 "name": null, 00:14:17.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.466 "is_configured": false, 00:14:17.466 "data_offset": 0, 00:14:17.466 "data_size": 63488 00:14:17.466 }, 00:14:17.466 { 00:14:17.466 "name": "BaseBdev3", 00:14:17.466 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:17.466 "is_configured": true, 00:14:17.466 "data_offset": 2048, 00:14:17.466 "data_size": 63488 00:14:17.466 }, 00:14:17.466 { 00:14:17.466 "name": "BaseBdev4", 00:14:17.466 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:17.466 "is_configured": true, 00:14:17.466 "data_offset": 
2048, 00:14:17.466 "data_size": 63488 00:14:17.466 } 00:14:17.466 ] 00:14:17.466 }' 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.466 15:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.035 [2024-11-25 15:41:16.408116] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:18.035 [2024-11-25 15:41:16.408256] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:18.035 [2024-11-25 15:41:16.408362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.294 15:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.294 15:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.294 15:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.294 15:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.294 15:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.294 15:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.294 15:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.294 15:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.294 15:41:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.294 15:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.294 15:41:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.554 15:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.554 "name": "raid_bdev1", 00:14:18.554 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:18.554 "strip_size_kb": 0, 00:14:18.554 "state": "online", 00:14:18.554 "raid_level": "raid1", 00:14:18.554 "superblock": true, 00:14:18.554 "num_base_bdevs": 4, 00:14:18.554 "num_base_bdevs_discovered": 3, 00:14:18.554 "num_base_bdevs_operational": 3, 00:14:18.554 "base_bdevs_list": [ 00:14:18.554 { 00:14:18.554 "name": "spare", 00:14:18.554 "uuid": "a7fbcdbb-4f88-5f32-b51f-83f20ad20ee5", 00:14:18.554 "is_configured": true, 00:14:18.554 "data_offset": 2048, 00:14:18.554 "data_size": 63488 00:14:18.554 }, 00:14:18.554 { 00:14:18.554 "name": null, 00:14:18.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.554 "is_configured": false, 00:14:18.554 "data_offset": 0, 00:14:18.554 "data_size": 63488 00:14:18.554 }, 00:14:18.554 { 00:14:18.554 "name": "BaseBdev3", 00:14:18.554 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:18.554 "is_configured": true, 00:14:18.554 "data_offset": 2048, 00:14:18.554 "data_size": 63488 00:14:18.554 }, 00:14:18.554 { 00:14:18.554 "name": "BaseBdev4", 00:14:18.554 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:18.554 "is_configured": true, 00:14:18.554 "data_offset": 2048, 00:14:18.554 "data_size": 63488 00:14:18.554 } 00:14:18.554 ] 00:14:18.554 }' 00:14:18.554 15:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.554 "name": "raid_bdev1", 00:14:18.554 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:18.554 "strip_size_kb": 0, 00:14:18.554 "state": "online", 00:14:18.554 "raid_level": "raid1", 00:14:18.554 "superblock": true, 00:14:18.554 "num_base_bdevs": 4, 00:14:18.554 "num_base_bdevs_discovered": 3, 00:14:18.554 "num_base_bdevs_operational": 3, 00:14:18.554 "base_bdevs_list": [ 00:14:18.554 { 00:14:18.554 "name": "spare", 00:14:18.554 "uuid": "a7fbcdbb-4f88-5f32-b51f-83f20ad20ee5", 00:14:18.554 "is_configured": true, 00:14:18.554 "data_offset": 2048, 
00:14:18.554 "data_size": 63488 00:14:18.554 }, 00:14:18.554 { 00:14:18.554 "name": null, 00:14:18.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.554 "is_configured": false, 00:14:18.554 "data_offset": 0, 00:14:18.554 "data_size": 63488 00:14:18.554 }, 00:14:18.554 { 00:14:18.554 "name": "BaseBdev3", 00:14:18.554 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:18.554 "is_configured": true, 00:14:18.554 "data_offset": 2048, 00:14:18.554 "data_size": 63488 00:14:18.554 }, 00:14:18.554 { 00:14:18.554 "name": "BaseBdev4", 00:14:18.554 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:18.554 "is_configured": true, 00:14:18.554 "data_offset": 2048, 00:14:18.554 "data_size": 63488 00:14:18.554 } 00:14:18.554 ] 00:14:18.554 }' 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.554 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.554 
15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.555 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.555 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.555 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.555 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.555 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.555 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.555 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.815 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.815 "name": "raid_bdev1", 00:14:18.815 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:18.815 "strip_size_kb": 0, 00:14:18.815 "state": "online", 00:14:18.815 "raid_level": "raid1", 00:14:18.815 "superblock": true, 00:14:18.815 "num_base_bdevs": 4, 00:14:18.815 "num_base_bdevs_discovered": 3, 00:14:18.815 "num_base_bdevs_operational": 3, 00:14:18.815 "base_bdevs_list": [ 00:14:18.815 { 00:14:18.815 "name": "spare", 00:14:18.815 "uuid": "a7fbcdbb-4f88-5f32-b51f-83f20ad20ee5", 00:14:18.815 "is_configured": true, 00:14:18.815 "data_offset": 2048, 00:14:18.815 "data_size": 63488 00:14:18.815 }, 00:14:18.815 { 00:14:18.815 "name": null, 00:14:18.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.815 "is_configured": false, 00:14:18.815 "data_offset": 0, 00:14:18.815 "data_size": 63488 00:14:18.815 }, 00:14:18.815 { 00:14:18.815 "name": "BaseBdev3", 00:14:18.815 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:18.815 "is_configured": true, 00:14:18.815 "data_offset": 2048, 00:14:18.815 "data_size": 63488 
00:14:18.815 }, 00:14:18.815 { 00:14:18.815 "name": "BaseBdev4", 00:14:18.815 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:18.815 "is_configured": true, 00:14:18.815 "data_offset": 2048, 00:14:18.815 "data_size": 63488 00:14:18.815 } 00:14:18.815 ] 00:14:18.815 }' 00:14:18.815 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.815 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.075 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:19.075 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.075 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.075 [2024-11-25 15:41:17.607466] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:19.075 [2024-11-25 15:41:17.607539] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.075 [2024-11-25 15:41:17.607647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.075 [2024-11-25 15:41:17.607728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.075 [2024-11-25 15:41:17.607738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:19.075 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.075 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.075 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.075 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.075 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:19.075 
15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.075 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:19.075 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:19.075 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:19.075 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:19.075 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:19.075 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:19.076 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:19.076 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:19.076 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:19.076 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:19.076 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:19.076 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:19.076 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:19.336 /dev/nbd0 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.336 1+0 records in 00:14:19.336 1+0 records out 00:14:19.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374869 s, 10.9 MB/s 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:19.336 15:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:19.596 /dev/nbd1 00:14:19.596 15:41:18 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.596 1+0 records in 00:14:19.596 1+0 records out 00:14:19.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430341 s, 9.5 MB/s 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:19.596 15:41:18 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:19.596 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:19.597 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:19.856 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:20.115 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:20.115 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:20.115 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:20.115 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.115 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.115 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:20.115 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:20.115 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.115 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:20.115 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:20.115 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.115 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.115 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.115 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:20.116 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.116 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.116 [2024-11-25 15:41:18.729880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:20.116 [2024-11-25 15:41:18.729936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.116 [2024-11-25 15:41:18.729957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:20.116 [2024-11-25 15:41:18.729966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.116 [2024-11-25 15:41:18.732118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.116 [2024-11-25 15:41:18.732198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:20.116 [2024-11-25 15:41:18.732296] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:20.116 [2024-11-25 15:41:18.732346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:20.116 [2024-11-25 15:41:18.732502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:20.116 [2024-11-25 15:41:18.732587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:20.116 spare 00:14:20.116 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.116 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:20.116 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.116 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.375 [2024-11-25 15:41:18.832474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:20.375 [2024-11-25 15:41:18.832499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:20.375 [2024-11-25 15:41:18.832778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:20.375 [2024-11-25 15:41:18.832952] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:20.375 [2024-11-25 15:41:18.832964] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:20.375 [2024-11-25 15:41:18.833158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.375 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.375 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:20.375 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.375 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.375 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.375 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.375 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.375 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.375 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.375 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.375 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.375 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.375 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.375 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.375 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:20.376 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.376 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.376 "name": "raid_bdev1", 00:14:20.376 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:20.376 "strip_size_kb": 0, 00:14:20.376 "state": "online", 00:14:20.376 "raid_level": "raid1", 00:14:20.376 "superblock": true, 00:14:20.376 "num_base_bdevs": 4, 00:14:20.376 "num_base_bdevs_discovered": 3, 00:14:20.376 "num_base_bdevs_operational": 3, 00:14:20.376 "base_bdevs_list": [ 00:14:20.376 { 00:14:20.376 "name": "spare", 00:14:20.376 "uuid": "a7fbcdbb-4f88-5f32-b51f-83f20ad20ee5", 00:14:20.376 "is_configured": true, 00:14:20.376 "data_offset": 2048, 00:14:20.376 "data_size": 63488 00:14:20.376 }, 00:14:20.376 { 00:14:20.376 "name": null, 00:14:20.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.376 "is_configured": false, 00:14:20.376 "data_offset": 2048, 00:14:20.376 "data_size": 63488 00:14:20.376 }, 00:14:20.376 { 00:14:20.376 "name": "BaseBdev3", 00:14:20.376 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:20.376 "is_configured": true, 00:14:20.376 "data_offset": 2048, 00:14:20.376 "data_size": 63488 00:14:20.376 }, 00:14:20.376 { 00:14:20.376 "name": "BaseBdev4", 00:14:20.376 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:20.376 "is_configured": true, 00:14:20.376 "data_offset": 2048, 00:14:20.376 "data_size": 63488 00:14:20.376 } 00:14:20.376 ] 00:14:20.376 }' 00:14:20.376 15:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.376 15:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.635 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:20.635 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.635 15:41:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:20.635 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:20.635 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.635 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.635 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.635 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.635 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.635 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.635 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.635 "name": "raid_bdev1", 00:14:20.635 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:20.635 "strip_size_kb": 0, 00:14:20.635 "state": "online", 00:14:20.635 "raid_level": "raid1", 00:14:20.635 "superblock": true, 00:14:20.635 "num_base_bdevs": 4, 00:14:20.635 "num_base_bdevs_discovered": 3, 00:14:20.635 "num_base_bdevs_operational": 3, 00:14:20.635 "base_bdevs_list": [ 00:14:20.635 { 00:14:20.635 "name": "spare", 00:14:20.635 "uuid": "a7fbcdbb-4f88-5f32-b51f-83f20ad20ee5", 00:14:20.635 "is_configured": true, 00:14:20.635 "data_offset": 2048, 00:14:20.635 "data_size": 63488 00:14:20.635 }, 00:14:20.635 { 00:14:20.635 "name": null, 00:14:20.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.635 "is_configured": false, 00:14:20.635 "data_offset": 2048, 00:14:20.635 "data_size": 63488 00:14:20.635 }, 00:14:20.635 { 00:14:20.635 "name": "BaseBdev3", 00:14:20.635 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:20.635 "is_configured": true, 00:14:20.635 "data_offset": 2048, 00:14:20.635 "data_size": 63488 00:14:20.635 
}, 00:14:20.635 { 00:14:20.635 "name": "BaseBdev4", 00:14:20.635 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:20.635 "is_configured": true, 00:14:20.635 "data_offset": 2048, 00:14:20.635 "data_size": 63488 00:14:20.635 } 00:14:20.635 ] 00:14:20.635 }' 00:14:20.635 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.895 [2024-11-25 15:41:19.420712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.895 "name": "raid_bdev1", 00:14:20.895 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:20.895 "strip_size_kb": 0, 00:14:20.895 "state": "online", 00:14:20.895 "raid_level": "raid1", 00:14:20.895 "superblock": true, 00:14:20.895 "num_base_bdevs": 4, 00:14:20.895 "num_base_bdevs_discovered": 2, 00:14:20.895 "num_base_bdevs_operational": 
2, 00:14:20.895 "base_bdevs_list": [ 00:14:20.895 { 00:14:20.895 "name": null, 00:14:20.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.895 "is_configured": false, 00:14:20.895 "data_offset": 0, 00:14:20.895 "data_size": 63488 00:14:20.895 }, 00:14:20.895 { 00:14:20.895 "name": null, 00:14:20.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.895 "is_configured": false, 00:14:20.895 "data_offset": 2048, 00:14:20.895 "data_size": 63488 00:14:20.895 }, 00:14:20.895 { 00:14:20.895 "name": "BaseBdev3", 00:14:20.895 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:20.895 "is_configured": true, 00:14:20.895 "data_offset": 2048, 00:14:20.895 "data_size": 63488 00:14:20.895 }, 00:14:20.895 { 00:14:20.895 "name": "BaseBdev4", 00:14:20.895 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:20.895 "is_configured": true, 00:14:20.895 "data_offset": 2048, 00:14:20.895 "data_size": 63488 00:14:20.895 } 00:14:20.895 ] 00:14:20.895 }' 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.895 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.463 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:21.463 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.463 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.463 [2024-11-25 15:41:19.879956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.463 [2024-11-25 15:41:19.880218] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:21.463 [2024-11-25 15:41:19.880279] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:21.463 [2024-11-25 15:41:19.880345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.463 [2024-11-25 15:41:19.894890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:21.463 15:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.463 15:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:21.463 [2024-11-25 15:41:19.896747] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:22.402 15:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.402 15:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.402 15:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.402 15:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.402 15:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.402 15:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.402 15:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.402 15:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.402 15:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.402 15:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.402 15:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.402 "name": "raid_bdev1", 00:14:22.402 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:22.402 "strip_size_kb": 0, 00:14:22.402 "state": "online", 00:14:22.402 "raid_level": "raid1", 
00:14:22.402 "superblock": true, 00:14:22.402 "num_base_bdevs": 4, 00:14:22.402 "num_base_bdevs_discovered": 3, 00:14:22.402 "num_base_bdevs_operational": 3, 00:14:22.402 "process": { 00:14:22.402 "type": "rebuild", 00:14:22.402 "target": "spare", 00:14:22.402 "progress": { 00:14:22.402 "blocks": 20480, 00:14:22.402 "percent": 32 00:14:22.402 } 00:14:22.402 }, 00:14:22.402 "base_bdevs_list": [ 00:14:22.402 { 00:14:22.402 "name": "spare", 00:14:22.402 "uuid": "a7fbcdbb-4f88-5f32-b51f-83f20ad20ee5", 00:14:22.402 "is_configured": true, 00:14:22.402 "data_offset": 2048, 00:14:22.402 "data_size": 63488 00:14:22.402 }, 00:14:22.402 { 00:14:22.402 "name": null, 00:14:22.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.402 "is_configured": false, 00:14:22.402 "data_offset": 2048, 00:14:22.402 "data_size": 63488 00:14:22.402 }, 00:14:22.402 { 00:14:22.402 "name": "BaseBdev3", 00:14:22.402 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:22.402 "is_configured": true, 00:14:22.402 "data_offset": 2048, 00:14:22.402 "data_size": 63488 00:14:22.402 }, 00:14:22.402 { 00:14:22.402 "name": "BaseBdev4", 00:14:22.402 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:22.402 "is_configured": true, 00:14:22.402 "data_offset": 2048, 00:14:22.402 "data_size": 63488 00:14:22.402 } 00:14:22.402 ] 00:14:22.402 }' 00:14:22.402 15:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.402 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.402 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.402 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.402 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:22.402 15:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:22.402 15:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.402 [2024-11-25 15:41:21.060041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:22.663 [2024-11-25 15:41:21.101477] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:22.663 [2024-11-25 15:41:21.101576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.663 [2024-11-25 15:41:21.101613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:22.663 [2024-11-25 15:41:21.101621] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.663 "name": "raid_bdev1", 00:14:22.663 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:22.663 "strip_size_kb": 0, 00:14:22.663 "state": "online", 00:14:22.663 "raid_level": "raid1", 00:14:22.663 "superblock": true, 00:14:22.663 "num_base_bdevs": 4, 00:14:22.663 "num_base_bdevs_discovered": 2, 00:14:22.663 "num_base_bdevs_operational": 2, 00:14:22.663 "base_bdevs_list": [ 00:14:22.663 { 00:14:22.663 "name": null, 00:14:22.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.663 "is_configured": false, 00:14:22.663 "data_offset": 0, 00:14:22.663 "data_size": 63488 00:14:22.663 }, 00:14:22.663 { 00:14:22.663 "name": null, 00:14:22.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.663 "is_configured": false, 00:14:22.663 "data_offset": 2048, 00:14:22.663 "data_size": 63488 00:14:22.663 }, 00:14:22.663 { 00:14:22.663 "name": "BaseBdev3", 00:14:22.663 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:22.663 "is_configured": true, 00:14:22.663 "data_offset": 2048, 00:14:22.663 "data_size": 63488 00:14:22.663 }, 00:14:22.663 { 00:14:22.663 "name": "BaseBdev4", 00:14:22.663 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:22.663 "is_configured": true, 00:14:22.663 "data_offset": 2048, 00:14:22.663 "data_size": 63488 00:14:22.663 } 00:14:22.663 ] 00:14:22.663 }' 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:22.663 15:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.924 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:22.924 15:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.924 15:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.924 [2024-11-25 15:41:21.542166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:22.924 [2024-11-25 15:41:21.542265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.924 [2024-11-25 15:41:21.542310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:22.924 [2024-11-25 15:41:21.542338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.924 [2024-11-25 15:41:21.542802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.924 [2024-11-25 15:41:21.542857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:22.924 [2024-11-25 15:41:21.542974] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:22.924 [2024-11-25 15:41:21.543041] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:22.924 [2024-11-25 15:41:21.543090] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:22.924 [2024-11-25 15:41:21.543142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:22.924 [2024-11-25 15:41:21.556890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:22.924 spare 00:14:22.924 15:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.924 [2024-11-25 15:41:21.558695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:22.924 15:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.305 "name": "raid_bdev1", 00:14:24.305 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:24.305 "strip_size_kb": 0, 00:14:24.305 "state": "online", 00:14:24.305 
"raid_level": "raid1", 00:14:24.305 "superblock": true, 00:14:24.305 "num_base_bdevs": 4, 00:14:24.305 "num_base_bdevs_discovered": 3, 00:14:24.305 "num_base_bdevs_operational": 3, 00:14:24.305 "process": { 00:14:24.305 "type": "rebuild", 00:14:24.305 "target": "spare", 00:14:24.305 "progress": { 00:14:24.305 "blocks": 20480, 00:14:24.305 "percent": 32 00:14:24.305 } 00:14:24.305 }, 00:14:24.305 "base_bdevs_list": [ 00:14:24.305 { 00:14:24.305 "name": "spare", 00:14:24.305 "uuid": "a7fbcdbb-4f88-5f32-b51f-83f20ad20ee5", 00:14:24.305 "is_configured": true, 00:14:24.305 "data_offset": 2048, 00:14:24.305 "data_size": 63488 00:14:24.305 }, 00:14:24.305 { 00:14:24.305 "name": null, 00:14:24.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.305 "is_configured": false, 00:14:24.305 "data_offset": 2048, 00:14:24.305 "data_size": 63488 00:14:24.305 }, 00:14:24.305 { 00:14:24.305 "name": "BaseBdev3", 00:14:24.305 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:24.305 "is_configured": true, 00:14:24.305 "data_offset": 2048, 00:14:24.305 "data_size": 63488 00:14:24.305 }, 00:14:24.305 { 00:14:24.305 "name": "BaseBdev4", 00:14:24.305 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:24.305 "is_configured": true, 00:14:24.305 "data_offset": 2048, 00:14:24.305 "data_size": 63488 00:14:24.305 } 00:14:24.305 ] 00:14:24.305 }' 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.305 [2024-11-25 15:41:22.719553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.305 [2024-11-25 15:41:22.763419] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:24.305 [2024-11-25 15:41:22.763475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.305 [2024-11-25 15:41:22.763506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.305 [2024-11-25 15:41:22.763515] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.305 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.305 
15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.306 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.306 15:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.306 15:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.306 15:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.306 15:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.306 "name": "raid_bdev1", 00:14:24.306 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:24.306 "strip_size_kb": 0, 00:14:24.306 "state": "online", 00:14:24.306 "raid_level": "raid1", 00:14:24.306 "superblock": true, 00:14:24.306 "num_base_bdevs": 4, 00:14:24.306 "num_base_bdevs_discovered": 2, 00:14:24.306 "num_base_bdevs_operational": 2, 00:14:24.306 "base_bdevs_list": [ 00:14:24.306 { 00:14:24.306 "name": null, 00:14:24.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.306 "is_configured": false, 00:14:24.306 "data_offset": 0, 00:14:24.306 "data_size": 63488 00:14:24.306 }, 00:14:24.306 { 00:14:24.306 "name": null, 00:14:24.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.306 "is_configured": false, 00:14:24.306 "data_offset": 2048, 00:14:24.306 "data_size": 63488 00:14:24.306 }, 00:14:24.306 { 00:14:24.306 "name": "BaseBdev3", 00:14:24.306 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:24.306 "is_configured": true, 00:14:24.306 "data_offset": 2048, 00:14:24.306 "data_size": 63488 00:14:24.306 }, 00:14:24.306 { 00:14:24.306 "name": "BaseBdev4", 00:14:24.306 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:24.306 "is_configured": true, 00:14:24.306 "data_offset": 2048, 00:14:24.306 "data_size": 63488 00:14:24.306 } 00:14:24.306 ] 00:14:24.306 }' 00:14:24.306 15:41:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.306 15:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.566 15:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:24.566 15:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.566 15:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:24.566 15:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:24.566 15:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.566 15:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.566 15:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.566 15:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.566 15:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.566 15:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.826 15:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.826 "name": "raid_bdev1", 00:14:24.826 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:24.826 "strip_size_kb": 0, 00:14:24.826 "state": "online", 00:14:24.826 "raid_level": "raid1", 00:14:24.826 "superblock": true, 00:14:24.826 "num_base_bdevs": 4, 00:14:24.826 "num_base_bdevs_discovered": 2, 00:14:24.826 "num_base_bdevs_operational": 2, 00:14:24.826 "base_bdevs_list": [ 00:14:24.826 { 00:14:24.826 "name": null, 00:14:24.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.826 "is_configured": false, 00:14:24.826 "data_offset": 0, 00:14:24.826 "data_size": 63488 00:14:24.826 }, 00:14:24.826 
{ 00:14:24.826 "name": null, 00:14:24.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.826 "is_configured": false, 00:14:24.826 "data_offset": 2048, 00:14:24.826 "data_size": 63488 00:14:24.826 }, 00:14:24.826 { 00:14:24.826 "name": "BaseBdev3", 00:14:24.826 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:24.826 "is_configured": true, 00:14:24.826 "data_offset": 2048, 00:14:24.826 "data_size": 63488 00:14:24.826 }, 00:14:24.826 { 00:14:24.826 "name": "BaseBdev4", 00:14:24.826 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:24.826 "is_configured": true, 00:14:24.826 "data_offset": 2048, 00:14:24.826 "data_size": 63488 00:14:24.826 } 00:14:24.826 ] 00:14:24.826 }' 00:14:24.826 15:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.826 15:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.826 15:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.826 15:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.826 15:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:24.826 15:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.826 15:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.826 15:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.826 15:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:24.826 15:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.826 15:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.826 [2024-11-25 15:41:23.366669] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:24.826 [2024-11-25 15:41:23.366729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.826 [2024-11-25 15:41:23.366750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:24.826 [2024-11-25 15:41:23.366760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.826 [2024-11-25 15:41:23.367252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.826 [2024-11-25 15:41:23.367273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:24.826 [2024-11-25 15:41:23.367353] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:24.826 [2024-11-25 15:41:23.367374] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:24.826 [2024-11-25 15:41:23.367383] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:24.826 [2024-11-25 15:41:23.367405] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:24.826 BaseBdev1 00:14:24.826 15:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.826 15:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.766 15:41:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.766 "name": "raid_bdev1", 00:14:25.766 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:25.766 "strip_size_kb": 0, 00:14:25.766 "state": "online", 00:14:25.766 "raid_level": "raid1", 00:14:25.766 "superblock": true, 00:14:25.766 "num_base_bdevs": 4, 00:14:25.766 "num_base_bdevs_discovered": 2, 00:14:25.766 "num_base_bdevs_operational": 2, 00:14:25.766 "base_bdevs_list": [ 00:14:25.766 { 00:14:25.766 "name": null, 00:14:25.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.766 "is_configured": false, 00:14:25.766 "data_offset": 0, 00:14:25.766 "data_size": 63488 00:14:25.766 }, 00:14:25.766 { 00:14:25.766 "name": null, 00:14:25.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.766 
"is_configured": false, 00:14:25.766 "data_offset": 2048, 00:14:25.766 "data_size": 63488 00:14:25.766 }, 00:14:25.766 { 00:14:25.766 "name": "BaseBdev3", 00:14:25.766 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:25.766 "is_configured": true, 00:14:25.766 "data_offset": 2048, 00:14:25.766 "data_size": 63488 00:14:25.766 }, 00:14:25.766 { 00:14:25.766 "name": "BaseBdev4", 00:14:25.766 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:25.766 "is_configured": true, 00:14:25.766 "data_offset": 2048, 00:14:25.766 "data_size": 63488 00:14:25.766 } 00:14:25.766 ] 00:14:25.766 }' 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.766 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:26.345 "name": "raid_bdev1", 00:14:26.345 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:26.345 "strip_size_kb": 0, 00:14:26.345 "state": "online", 00:14:26.345 "raid_level": "raid1", 00:14:26.345 "superblock": true, 00:14:26.345 "num_base_bdevs": 4, 00:14:26.345 "num_base_bdevs_discovered": 2, 00:14:26.345 "num_base_bdevs_operational": 2, 00:14:26.345 "base_bdevs_list": [ 00:14:26.345 { 00:14:26.345 "name": null, 00:14:26.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.345 "is_configured": false, 00:14:26.345 "data_offset": 0, 00:14:26.345 "data_size": 63488 00:14:26.345 }, 00:14:26.345 { 00:14:26.345 "name": null, 00:14:26.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.345 "is_configured": false, 00:14:26.345 "data_offset": 2048, 00:14:26.345 "data_size": 63488 00:14:26.345 }, 00:14:26.345 { 00:14:26.345 "name": "BaseBdev3", 00:14:26.345 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:26.345 "is_configured": true, 00:14:26.345 "data_offset": 2048, 00:14:26.345 "data_size": 63488 00:14:26.345 }, 00:14:26.345 { 00:14:26.345 "name": "BaseBdev4", 00:14:26.345 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:26.345 "is_configured": true, 00:14:26.345 "data_offset": 2048, 00:14:26.345 "data_size": 63488 00:14:26.345 } 00:14:26.345 ] 00:14:26.345 }' 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.345 [2024-11-25 15:41:24.951967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:26.345 [2024-11-25 15:41:24.952186] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:26.345 [2024-11-25 15:41:24.952204] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:26.345 request: 00:14:26.345 { 00:14:26.345 "base_bdev": "BaseBdev1", 00:14:26.345 "raid_bdev": "raid_bdev1", 00:14:26.345 "method": "bdev_raid_add_base_bdev", 00:14:26.345 "req_id": 1 00:14:26.345 } 00:14:26.345 Got JSON-RPC error response 00:14:26.345 response: 00:14:26.345 { 00:14:26.345 "code": -22, 00:14:26.345 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:26.345 } 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:26.345 15:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:27.299 15:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:27.299 15:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.299 15:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.299 15:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.299 15:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.299 15:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:27.299 15:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.299 15:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.299 15:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.299 15:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.299 15:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.299 15:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.299 15:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.299 15:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:27.559 15:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.559 15:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.559 "name": "raid_bdev1", 00:14:27.559 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:27.559 "strip_size_kb": 0, 00:14:27.559 "state": "online", 00:14:27.559 "raid_level": "raid1", 00:14:27.559 "superblock": true, 00:14:27.559 "num_base_bdevs": 4, 00:14:27.559 "num_base_bdevs_discovered": 2, 00:14:27.559 "num_base_bdevs_operational": 2, 00:14:27.559 "base_bdevs_list": [ 00:14:27.559 { 00:14:27.559 "name": null, 00:14:27.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.559 "is_configured": false, 00:14:27.559 "data_offset": 0, 00:14:27.559 "data_size": 63488 00:14:27.559 }, 00:14:27.559 { 00:14:27.559 "name": null, 00:14:27.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.559 "is_configured": false, 00:14:27.559 "data_offset": 2048, 00:14:27.559 "data_size": 63488 00:14:27.559 }, 00:14:27.559 { 00:14:27.559 "name": "BaseBdev3", 00:14:27.559 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:27.559 "is_configured": true, 00:14:27.559 "data_offset": 2048, 00:14:27.559 "data_size": 63488 00:14:27.559 }, 00:14:27.559 { 00:14:27.559 "name": "BaseBdev4", 00:14:27.559 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:27.559 "is_configured": true, 00:14:27.559 "data_offset": 2048, 00:14:27.559 "data_size": 63488 00:14:27.559 } 00:14:27.559 ] 00:14:27.559 }' 00:14:27.559 15:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.559 15:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.819 15:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.820 15:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.820 15:41:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.820 15:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.820 15:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.820 15:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.820 15:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.820 15:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.820 15:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.820 15:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.820 15:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.820 "name": "raid_bdev1", 00:14:27.820 "uuid": "93460345-9be7-49b1-92e9-1172d5b602e6", 00:14:27.820 "strip_size_kb": 0, 00:14:27.820 "state": "online", 00:14:27.820 "raid_level": "raid1", 00:14:27.820 "superblock": true, 00:14:27.820 "num_base_bdevs": 4, 00:14:27.820 "num_base_bdevs_discovered": 2, 00:14:27.820 "num_base_bdevs_operational": 2, 00:14:27.820 "base_bdevs_list": [ 00:14:27.820 { 00:14:27.820 "name": null, 00:14:27.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.820 "is_configured": false, 00:14:27.820 "data_offset": 0, 00:14:27.820 "data_size": 63488 00:14:27.820 }, 00:14:27.820 { 00:14:27.820 "name": null, 00:14:27.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.820 "is_configured": false, 00:14:27.820 "data_offset": 2048, 00:14:27.820 "data_size": 63488 00:14:27.820 }, 00:14:27.820 { 00:14:27.820 "name": "BaseBdev3", 00:14:27.820 "uuid": "7ac90930-45ec-55f8-b619-365324a68bf8", 00:14:27.820 "is_configured": true, 00:14:27.820 "data_offset": 2048, 00:14:27.820 "data_size": 63488 00:14:27.820 }, 
00:14:27.820 { 00:14:27.820 "name": "BaseBdev4", 00:14:27.820 "uuid": "7fba1d78-5113-519d-9504-20814a66cc21", 00:14:27.820 "is_configured": true, 00:14:27.820 "data_offset": 2048, 00:14:27.820 "data_size": 63488 00:14:27.820 } 00:14:27.820 ] 00:14:27.820 }' 00:14:27.820 15:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.080 15:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.080 15:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.080 15:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.080 15:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77655 00:14:28.080 15:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77655 ']' 00:14:28.080 15:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77655 00:14:28.080 15:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:28.080 15:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.080 15:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77655 00:14:28.080 15:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:28.080 15:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:28.080 15:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77655' 00:14:28.080 killing process with pid 77655 00:14:28.080 15:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77655 00:14:28.080 Received shutdown signal, test time was about 60.000000 seconds 00:14:28.080 00:14:28.080 Latency(us) 00:14:28.080 
[2024-11-25T15:41:26.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.080 [2024-11-25T15:41:26.761Z] =================================================================================================================== 00:14:28.080 [2024-11-25T15:41:26.761Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:28.080 [2024-11-25 15:41:26.593938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:28.080 [2024-11-25 15:41:26.594070] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.080 15:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77655 00:14:28.080 [2024-11-25 15:41:26.594140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.080 [2024-11-25 15:41:26.594151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:28.650 [2024-11-25 15:41:27.052306] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:29.591 00:14:29.591 real 0m24.103s 00:14:29.591 user 0m29.421s 00:14:29.591 sys 0m3.522s 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.591 ************************************ 00:14:29.591 END TEST raid_rebuild_test_sb 00:14:29.591 ************************************ 00:14:29.591 15:41:28 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:29.591 15:41:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:29.591 15:41:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.591 15:41:28 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:14:29.591 ************************************ 00:14:29.591 START TEST raid_rebuild_test_io 00:14:29.591 ************************************ 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78397 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78397 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78397 ']' 00:14:29.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.591 15:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.592 15:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.592 15:41:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.592 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:29.592 Zero copy mechanism will not be used. 00:14:29.592 [2024-11-25 15:41:28.248386] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:14:29.592 [2024-11-25 15:41:28.248490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78397 ] 00:14:29.852 [2024-11-25 15:41:28.422024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.852 [2024-11-25 15:41:28.531877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.112 [2024-11-25 15:41:28.726783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.112 [2024-11-25 15:41:28.726814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.685 BaseBdev1_malloc 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.685 [2024-11-25 15:41:29.111261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:30.685 [2024-11-25 15:41:29.111427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.685 [2024-11-25 15:41:29.111456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:30.685 [2024-11-25 15:41:29.111467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.685 [2024-11-25 15:41:29.113462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.685 [2024-11-25 15:41:29.113513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:30.685 BaseBdev1 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.685 BaseBdev2_malloc 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.685 [2024-11-25 15:41:29.164557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:30.685 [2024-11-25 15:41:29.164630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.685 [2024-11-25 15:41:29.164664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:30.685 [2024-11-25 15:41:29.164674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.685 [2024-11-25 15:41:29.166590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.685 [2024-11-25 15:41:29.166680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:30.685 BaseBdev2 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.685 BaseBdev3_malloc 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.685 [2024-11-25 15:41:29.243881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:30.685 [2024-11-25 15:41:29.243996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.685 [2024-11-25 15:41:29.244065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:30.685 [2024-11-25 15:41:29.244107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.685 [2024-11-25 15:41:29.246062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.685 [2024-11-25 15:41:29.246128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:30.685 BaseBdev3 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.685 BaseBdev4_malloc 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 
00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.685 [2024-11-25 15:41:29.296926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:30.685 [2024-11-25 15:41:29.296981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.685 [2024-11-25 15:41:29.297014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:30.685 [2024-11-25 15:41:29.297035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.685 [2024-11-25 15:41:29.298962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.685 [2024-11-25 15:41:29.299093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:30.685 BaseBdev4 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.685 spare_malloc 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.685 spare_delay 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.685 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.685 [2024-11-25 15:41:29.362210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:30.685 [2024-11-25 15:41:29.362268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.685 [2024-11-25 15:41:29.362302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:30.685 [2024-11-25 15:41:29.362312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.685 [2024-11-25 15:41:29.364271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.949 [2024-11-25 15:41:29.364362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:30.949 spare 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.949 [2024-11-25 15:41:29.374218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.949 [2024-11-25 15:41:29.375912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.949 [2024-11-25 15:41:29.375977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:14:30.949 [2024-11-25 15:41:29.376039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:30.949 [2024-11-25 15:41:29.376111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:30.949 [2024-11-25 15:41:29.376123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:30.949 [2024-11-25 15:41:29.376349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:30.949 [2024-11-25 15:41:29.376524] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:30.949 [2024-11-25 15:41:29.376536] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:30.949 [2024-11-25 15:41:29.376676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.949 "name": "raid_bdev1", 00:14:30.949 "uuid": "e3c8e3e0-98f6-4900-94f6-767b137585e1", 00:14:30.949 "strip_size_kb": 0, 00:14:30.949 "state": "online", 00:14:30.949 "raid_level": "raid1", 00:14:30.949 "superblock": false, 00:14:30.949 "num_base_bdevs": 4, 00:14:30.949 "num_base_bdevs_discovered": 4, 00:14:30.949 "num_base_bdevs_operational": 4, 00:14:30.949 "base_bdevs_list": [ 00:14:30.949 { 00:14:30.949 "name": "BaseBdev1", 00:14:30.949 "uuid": "03a2cc42-e5bc-56d5-8ff6-34d44fc1d84c", 00:14:30.949 "is_configured": true, 00:14:30.949 "data_offset": 0, 00:14:30.949 "data_size": 65536 00:14:30.949 }, 00:14:30.949 { 00:14:30.949 "name": "BaseBdev2", 00:14:30.949 "uuid": "c47aa9fc-7463-5d6d-8fdb-053b6b1d0de0", 00:14:30.949 "is_configured": true, 00:14:30.949 "data_offset": 0, 00:14:30.949 "data_size": 65536 00:14:30.949 }, 00:14:30.949 { 00:14:30.949 "name": "BaseBdev3", 00:14:30.949 "uuid": "cb30c6eb-abb3-5cbd-8022-1485710e92fd", 00:14:30.949 "is_configured": true, 00:14:30.949 "data_offset": 0, 00:14:30.949 "data_size": 65536 00:14:30.949 }, 00:14:30.949 { 00:14:30.949 "name": "BaseBdev4", 00:14:30.949 "uuid": "039b0c8a-eda4-5e3d-843e-48743ab1f398", 00:14:30.949 "is_configured": true, 00:14:30.949 
"data_offset": 0, 00:14:30.949 "data_size": 65536 00:14:30.949 } 00:14:30.949 ] 00:14:30.949 }' 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.949 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.209 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:31.209 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:31.209 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.209 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.209 [2024-11-25 15:41:29.769877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.209 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.209 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:31.209 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.209 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.210 [2024-11-25 15:41:29.845386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.210 15:41:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.210 "name": "raid_bdev1", 00:14:31.210 "uuid": "e3c8e3e0-98f6-4900-94f6-767b137585e1", 00:14:31.210 "strip_size_kb": 0, 00:14:31.210 "state": "online", 00:14:31.210 "raid_level": "raid1", 00:14:31.210 "superblock": false, 00:14:31.210 "num_base_bdevs": 4, 00:14:31.210 "num_base_bdevs_discovered": 3, 00:14:31.210 "num_base_bdevs_operational": 3, 00:14:31.210 "base_bdevs_list": [ 00:14:31.210 { 00:14:31.210 "name": null, 00:14:31.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.210 "is_configured": false, 00:14:31.210 "data_offset": 0, 00:14:31.210 "data_size": 65536 00:14:31.210 }, 00:14:31.210 { 00:14:31.210 "name": "BaseBdev2", 00:14:31.210 "uuid": "c47aa9fc-7463-5d6d-8fdb-053b6b1d0de0", 00:14:31.210 "is_configured": true, 00:14:31.210 "data_offset": 0, 00:14:31.210 "data_size": 65536 00:14:31.210 }, 00:14:31.210 { 00:14:31.210 "name": "BaseBdev3", 00:14:31.210 "uuid": "cb30c6eb-abb3-5cbd-8022-1485710e92fd", 00:14:31.210 "is_configured": true, 00:14:31.210 "data_offset": 0, 00:14:31.210 "data_size": 65536 00:14:31.210 }, 00:14:31.210 { 00:14:31.210 "name": "BaseBdev4", 00:14:31.210 "uuid": "039b0c8a-eda4-5e3d-843e-48743ab1f398", 00:14:31.210 "is_configured": true, 00:14:31.210 "data_offset": 0, 00:14:31.210 "data_size": 65536 00:14:31.210 } 00:14:31.210 ] 00:14:31.210 }' 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.210 15:41:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.470 [2024-11-25 15:41:29.937515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:31.470 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:14:31.470 Zero copy mechanism will not be used. 00:14:31.470 Running I/O for 60 seconds... 00:14:31.730 15:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:31.730 15:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.730 15:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.730 [2024-11-25 15:41:30.312100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:31.730 15:41:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.730 15:41:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:31.730 [2024-11-25 15:41:30.382798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:31.730 [2024-11-25 15:41:30.384799] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:31.990 [2024-11-25 15:41:30.515560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:31.990 [2024-11-25 15:41:30.643720] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:31.990 [2024-11-25 15:41:30.644170] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:32.249 [2024-11-25 15:41:30.878492] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:32.510 185.00 IOPS, 555.00 MiB/s [2024-11-25T15:41:31.191Z] [2024-11-25 15:41:31.007074] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:32.510 [2024-11-25 15:41:31.007782] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 
offset_begin: 6144 offset_end: 12288 00:14:32.770 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.770 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.770 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.770 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.770 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.770 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.770 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.770 15:41:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.770 15:41:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.770 [2024-11-25 15:41:31.369596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:32.770 15:41:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.770 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.770 "name": "raid_bdev1", 00:14:32.770 "uuid": "e3c8e3e0-98f6-4900-94f6-767b137585e1", 00:14:32.770 "strip_size_kb": 0, 00:14:32.770 "state": "online", 00:14:32.770 "raid_level": "raid1", 00:14:32.770 "superblock": false, 00:14:32.771 "num_base_bdevs": 4, 00:14:32.771 "num_base_bdevs_discovered": 4, 00:14:32.771 "num_base_bdevs_operational": 4, 00:14:32.771 "process": { 00:14:32.771 "type": "rebuild", 00:14:32.771 "target": "spare", 00:14:32.771 "progress": { 00:14:32.771 "blocks": 14336, 00:14:32.771 "percent": 21 00:14:32.771 } 00:14:32.771 }, 00:14:32.771 
"base_bdevs_list": [ 00:14:32.771 { 00:14:32.771 "name": "spare", 00:14:32.771 "uuid": "bbd1cc4c-5901-541c-898b-4b104342bbcb", 00:14:32.771 "is_configured": true, 00:14:32.771 "data_offset": 0, 00:14:32.771 "data_size": 65536 00:14:32.771 }, 00:14:32.771 { 00:14:32.771 "name": "BaseBdev2", 00:14:32.771 "uuid": "c47aa9fc-7463-5d6d-8fdb-053b6b1d0de0", 00:14:32.771 "is_configured": true, 00:14:32.771 "data_offset": 0, 00:14:32.771 "data_size": 65536 00:14:32.771 }, 00:14:32.771 { 00:14:32.771 "name": "BaseBdev3", 00:14:32.771 "uuid": "cb30c6eb-abb3-5cbd-8022-1485710e92fd", 00:14:32.771 "is_configured": true, 00:14:32.771 "data_offset": 0, 00:14:32.771 "data_size": 65536 00:14:32.771 }, 00:14:32.771 { 00:14:32.771 "name": "BaseBdev4", 00:14:32.771 "uuid": "039b0c8a-eda4-5e3d-843e-48743ab1f398", 00:14:32.771 "is_configured": true, 00:14:32.771 "data_offset": 0, 00:14:32.771 "data_size": 65536 00:14:32.771 } 00:14:32.771 ] 00:14:32.771 }' 00:14:32.771 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.031 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.031 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.031 [2024-11-25 15:41:31.486894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:33.031 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.031 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:33.031 15:41:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.031 15:41:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.031 [2024-11-25 15:41:31.506290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:14:33.031 [2024-11-25 15:41:31.597814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:33.031 [2024-11-25 15:41:31.599123] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:33.031 [2024-11-25 15:41:31.607776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.032 [2024-11-25 15:41:31.607882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.032 [2024-11-25 15:41:31.607898] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:33.032 [2024-11-25 15:41:31.640284] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.032 "name": "raid_bdev1", 00:14:33.032 "uuid": "e3c8e3e0-98f6-4900-94f6-767b137585e1", 00:14:33.032 "strip_size_kb": 0, 00:14:33.032 "state": "online", 00:14:33.032 "raid_level": "raid1", 00:14:33.032 "superblock": false, 00:14:33.032 "num_base_bdevs": 4, 00:14:33.032 "num_base_bdevs_discovered": 3, 00:14:33.032 "num_base_bdevs_operational": 3, 00:14:33.032 "base_bdevs_list": [ 00:14:33.032 { 00:14:33.032 "name": null, 00:14:33.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.032 "is_configured": false, 00:14:33.032 "data_offset": 0, 00:14:33.032 "data_size": 65536 00:14:33.032 }, 00:14:33.032 { 00:14:33.032 "name": "BaseBdev2", 00:14:33.032 "uuid": "c47aa9fc-7463-5d6d-8fdb-053b6b1d0de0", 00:14:33.032 "is_configured": true, 00:14:33.032 "data_offset": 0, 00:14:33.032 "data_size": 65536 00:14:33.032 }, 00:14:33.032 { 00:14:33.032 "name": "BaseBdev3", 00:14:33.032 "uuid": "cb30c6eb-abb3-5cbd-8022-1485710e92fd", 00:14:33.032 "is_configured": true, 00:14:33.032 "data_offset": 0, 00:14:33.032 "data_size": 65536 00:14:33.032 }, 00:14:33.032 { 00:14:33.032 "name": "BaseBdev4", 00:14:33.032 "uuid": "039b0c8a-eda4-5e3d-843e-48743ab1f398", 00:14:33.032 "is_configured": true, 00:14:33.032 "data_offset": 0, 00:14:33.032 "data_size": 65536 00:14:33.032 } 00:14:33.032 ] 00:14:33.032 
}' 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.032 15:41:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.553 153.50 IOPS, 460.50 MiB/s [2024-11-25T15:41:32.234Z] 15:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.553 15:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.553 15:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.553 15:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.553 15:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.553 15:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.553 15:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.553 15:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.553 15:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.553 15:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.553 15:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.553 "name": "raid_bdev1", 00:14:33.553 "uuid": "e3c8e3e0-98f6-4900-94f6-767b137585e1", 00:14:33.553 "strip_size_kb": 0, 00:14:33.553 "state": "online", 00:14:33.553 "raid_level": "raid1", 00:14:33.553 "superblock": false, 00:14:33.553 "num_base_bdevs": 4, 00:14:33.553 "num_base_bdevs_discovered": 3, 00:14:33.553 "num_base_bdevs_operational": 3, 00:14:33.553 "base_bdevs_list": [ 00:14:33.553 { 00:14:33.553 "name": null, 00:14:33.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.553 "is_configured": false, 00:14:33.553 
"data_offset": 0, 00:14:33.553 "data_size": 65536 00:14:33.553 }, 00:14:33.553 { 00:14:33.553 "name": "BaseBdev2", 00:14:33.553 "uuid": "c47aa9fc-7463-5d6d-8fdb-053b6b1d0de0", 00:14:33.553 "is_configured": true, 00:14:33.553 "data_offset": 0, 00:14:33.553 "data_size": 65536 00:14:33.553 }, 00:14:33.553 { 00:14:33.553 "name": "BaseBdev3", 00:14:33.553 "uuid": "cb30c6eb-abb3-5cbd-8022-1485710e92fd", 00:14:33.553 "is_configured": true, 00:14:33.553 "data_offset": 0, 00:14:33.553 "data_size": 65536 00:14:33.553 }, 00:14:33.553 { 00:14:33.553 "name": "BaseBdev4", 00:14:33.553 "uuid": "039b0c8a-eda4-5e3d-843e-48743ab1f398", 00:14:33.553 "is_configured": true, 00:14:33.553 "data_offset": 0, 00:14:33.553 "data_size": 65536 00:14:33.553 } 00:14:33.553 ] 00:14:33.553 }' 00:14:33.553 15:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.814 15:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.814 15:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.814 15:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.814 15:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:33.814 15:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.814 15:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.814 [2024-11-25 15:41:32.271604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.814 15:41:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.814 15:41:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:33.814 [2024-11-25 15:41:32.349883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:14:33.814 [2024-11-25 15:41:32.351815] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:33.814 [2024-11-25 15:41:32.473460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:33.814 [2024-11-25 15:41:32.474793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:34.074 [2024-11-25 15:41:32.689327] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:34.074 [2024-11-25 15:41:32.690148] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:34.594 160.00 IOPS, 480.00 MiB/s [2024-11-25T15:41:33.275Z] [2024-11-25 15:41:33.037712] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:34.594 [2024-11-25 15:41:33.038427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:34.594 [2024-11-25 15:41:33.249271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:34.594 [2024-11-25 15:41:33.249649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.855 "name": "raid_bdev1", 00:14:34.855 "uuid": "e3c8e3e0-98f6-4900-94f6-767b137585e1", 00:14:34.855 "strip_size_kb": 0, 00:14:34.855 "state": "online", 00:14:34.855 "raid_level": "raid1", 00:14:34.855 "superblock": false, 00:14:34.855 "num_base_bdevs": 4, 00:14:34.855 "num_base_bdevs_discovered": 4, 00:14:34.855 "num_base_bdevs_operational": 4, 00:14:34.855 "process": { 00:14:34.855 "type": "rebuild", 00:14:34.855 "target": "spare", 00:14:34.855 "progress": { 00:14:34.855 "blocks": 10240, 00:14:34.855 "percent": 15 00:14:34.855 } 00:14:34.855 }, 00:14:34.855 "base_bdevs_list": [ 00:14:34.855 { 00:14:34.855 "name": "spare", 00:14:34.855 "uuid": "bbd1cc4c-5901-541c-898b-4b104342bbcb", 00:14:34.855 "is_configured": true, 00:14:34.855 "data_offset": 0, 00:14:34.855 "data_size": 65536 00:14:34.855 }, 00:14:34.855 { 00:14:34.855 "name": "BaseBdev2", 00:14:34.855 "uuid": "c47aa9fc-7463-5d6d-8fdb-053b6b1d0de0", 00:14:34.855 "is_configured": true, 00:14:34.855 "data_offset": 0, 00:14:34.855 "data_size": 65536 00:14:34.855 }, 00:14:34.855 { 00:14:34.855 "name": "BaseBdev3", 00:14:34.855 "uuid": "cb30c6eb-abb3-5cbd-8022-1485710e92fd", 00:14:34.855 "is_configured": true, 00:14:34.855 "data_offset": 0, 00:14:34.855 "data_size": 65536 00:14:34.855 }, 00:14:34.855 { 00:14:34.855 "name": 
"BaseBdev4", 00:14:34.855 "uuid": "039b0c8a-eda4-5e3d-843e-48743ab1f398", 00:14:34.855 "is_configured": true, 00:14:34.855 "data_offset": 0, 00:14:34.855 "data_size": 65536 00:14:34.855 } 00:14:34.855 ] 00:14:34.855 }' 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.855 [2024-11-25 15:41:33.484942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:34.855 [2024-11-25 15:41:33.507504] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:34.855 [2024-11-25 15:41:33.507559] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:34.855 [2024-11-25 15:41:33.514613] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:34.855 15:41:33 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.855 15:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.116 "name": "raid_bdev1", 00:14:35.116 "uuid": "e3c8e3e0-98f6-4900-94f6-767b137585e1", 00:14:35.116 "strip_size_kb": 0, 00:14:35.116 "state": "online", 00:14:35.116 "raid_level": "raid1", 00:14:35.116 "superblock": false, 00:14:35.116 "num_base_bdevs": 4, 00:14:35.116 "num_base_bdevs_discovered": 3, 00:14:35.116 "num_base_bdevs_operational": 3, 00:14:35.116 "process": { 00:14:35.116 "type": "rebuild", 00:14:35.116 "target": "spare", 00:14:35.116 "progress": { 00:14:35.116 
"blocks": 14336, 00:14:35.116 "percent": 21 00:14:35.116 } 00:14:35.116 }, 00:14:35.116 "base_bdevs_list": [ 00:14:35.116 { 00:14:35.116 "name": "spare", 00:14:35.116 "uuid": "bbd1cc4c-5901-541c-898b-4b104342bbcb", 00:14:35.116 "is_configured": true, 00:14:35.116 "data_offset": 0, 00:14:35.116 "data_size": 65536 00:14:35.116 }, 00:14:35.116 { 00:14:35.116 "name": null, 00:14:35.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.116 "is_configured": false, 00:14:35.116 "data_offset": 0, 00:14:35.116 "data_size": 65536 00:14:35.116 }, 00:14:35.116 { 00:14:35.116 "name": "BaseBdev3", 00:14:35.116 "uuid": "cb30c6eb-abb3-5cbd-8022-1485710e92fd", 00:14:35.116 "is_configured": true, 00:14:35.116 "data_offset": 0, 00:14:35.116 "data_size": 65536 00:14:35.116 }, 00:14:35.116 { 00:14:35.116 "name": "BaseBdev4", 00:14:35.116 "uuid": "039b0c8a-eda4-5e3d-843e-48743ab1f398", 00:14:35.116 "is_configured": true, 00:14:35.116 "data_offset": 0, 00:14:35.116 "data_size": 65536 00:14:35.116 } 00:14:35.116 ] 00:14:35.116 }' 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=466 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.116 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.116 "name": "raid_bdev1", 00:14:35.116 "uuid": "e3c8e3e0-98f6-4900-94f6-767b137585e1", 00:14:35.116 "strip_size_kb": 0, 00:14:35.116 "state": "online", 00:14:35.116 "raid_level": "raid1", 00:14:35.116 "superblock": false, 00:14:35.116 "num_base_bdevs": 4, 00:14:35.116 "num_base_bdevs_discovered": 3, 00:14:35.116 "num_base_bdevs_operational": 3, 00:14:35.116 "process": { 00:14:35.116 "type": "rebuild", 00:14:35.116 "target": "spare", 00:14:35.116 "progress": { 00:14:35.116 "blocks": 14336, 00:14:35.116 "percent": 21 00:14:35.116 } 00:14:35.116 }, 00:14:35.116 "base_bdevs_list": [ 00:14:35.116 { 00:14:35.116 "name": "spare", 00:14:35.116 "uuid": "bbd1cc4c-5901-541c-898b-4b104342bbcb", 00:14:35.116 "is_configured": true, 00:14:35.116 "data_offset": 0, 00:14:35.116 "data_size": 65536 00:14:35.116 }, 00:14:35.116 { 00:14:35.116 "name": null, 00:14:35.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.116 "is_configured": false, 00:14:35.116 "data_offset": 0, 00:14:35.116 "data_size": 65536 00:14:35.116 }, 00:14:35.116 { 00:14:35.116 "name": "BaseBdev3", 
00:14:35.117 "uuid": "cb30c6eb-abb3-5cbd-8022-1485710e92fd", 00:14:35.117 "is_configured": true, 00:14:35.117 "data_offset": 0, 00:14:35.117 "data_size": 65536 00:14:35.117 }, 00:14:35.117 { 00:14:35.117 "name": "BaseBdev4", 00:14:35.117 "uuid": "039b0c8a-eda4-5e3d-843e-48743ab1f398", 00:14:35.117 "is_configured": true, 00:14:35.117 "data_offset": 0, 00:14:35.117 "data_size": 65536 00:14:35.117 } 00:14:35.117 ] 00:14:35.117 }' 00:14:35.117 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.117 [2024-11-25 15:41:33.725388] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:35.117 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.117 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.377 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.377 15:41:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:35.377 142.25 IOPS, 426.75 MiB/s [2024-11-25T15:41:34.058Z] [2024-11-25 15:41:33.954503] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:35.637 [2024-11-25 15:41:34.174506] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:36.207 [2024-11-25 15:41:34.639193] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:36.207 15:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:36.207 15:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.207 15:41:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.207 15:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.207 15:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.207 15:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.207 15:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.207 15:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.207 15:41:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.207 15:41:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.207 15:41:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.207 15:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.207 "name": "raid_bdev1", 00:14:36.207 "uuid": "e3c8e3e0-98f6-4900-94f6-767b137585e1", 00:14:36.207 "strip_size_kb": 0, 00:14:36.207 "state": "online", 00:14:36.207 "raid_level": "raid1", 00:14:36.207 "superblock": false, 00:14:36.207 "num_base_bdevs": 4, 00:14:36.207 "num_base_bdevs_discovered": 3, 00:14:36.207 "num_base_bdevs_operational": 3, 00:14:36.207 "process": { 00:14:36.207 "type": "rebuild", 00:14:36.207 "target": "spare", 00:14:36.207 "progress": { 00:14:36.207 "blocks": 28672, 00:14:36.207 "percent": 43 00:14:36.207 } 00:14:36.207 }, 00:14:36.207 "base_bdevs_list": [ 00:14:36.207 { 00:14:36.207 "name": "spare", 00:14:36.207 "uuid": "bbd1cc4c-5901-541c-898b-4b104342bbcb", 00:14:36.207 "is_configured": true, 00:14:36.207 "data_offset": 0, 00:14:36.207 "data_size": 65536 00:14:36.207 }, 00:14:36.207 { 00:14:36.207 "name": null, 00:14:36.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.207 "is_configured": false, 00:14:36.207 
"data_offset": 0, 00:14:36.207 "data_size": 65536 00:14:36.207 }, 00:14:36.207 { 00:14:36.207 "name": "BaseBdev3", 00:14:36.207 "uuid": "cb30c6eb-abb3-5cbd-8022-1485710e92fd", 00:14:36.207 "is_configured": true, 00:14:36.207 "data_offset": 0, 00:14:36.207 "data_size": 65536 00:14:36.207 }, 00:14:36.207 { 00:14:36.207 "name": "BaseBdev4", 00:14:36.207 "uuid": "039b0c8a-eda4-5e3d-843e-48743ab1f398", 00:14:36.207 "is_configured": true, 00:14:36.207 "data_offset": 0, 00:14:36.207 "data_size": 65536 00:14:36.207 } 00:14:36.207 ] 00:14:36.207 }' 00:14:36.207 15:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.468 15:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.468 15:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.468 124.80 IOPS, 374.40 MiB/s [2024-11-25T15:41:35.149Z] 15:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.468 15:41:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:36.468 [2024-11-25 15:41:34.978198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:36.468 [2024-11-25 15:41:35.093418] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:36.728 [2024-11-25 15:41:35.314272] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:36.989 [2024-11-25 15:41:35.423239] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:37.250 [2024-11-25 15:41:35.727086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:37.538 113.33 IOPS, 340.00 
MiB/s [2024-11-25T15:41:36.219Z] 15:41:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.538 15:41:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.538 15:41:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.538 15:41:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.538 15:41:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.538 15:41:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.538 15:41:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.538 15:41:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.538 15:41:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.538 15:41:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.538 15:41:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.538 15:41:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.538 "name": "raid_bdev1", 00:14:37.538 "uuid": "e3c8e3e0-98f6-4900-94f6-767b137585e1", 00:14:37.538 "strip_size_kb": 0, 00:14:37.538 "state": "online", 00:14:37.538 "raid_level": "raid1", 00:14:37.538 "superblock": false, 00:14:37.538 "num_base_bdevs": 4, 00:14:37.538 "num_base_bdevs_discovered": 3, 00:14:37.538 "num_base_bdevs_operational": 3, 00:14:37.538 "process": { 00:14:37.538 "type": "rebuild", 00:14:37.538 "target": "spare", 00:14:37.538 "progress": { 00:14:37.538 "blocks": 49152, 00:14:37.538 "percent": 75 00:14:37.538 } 00:14:37.538 }, 00:14:37.538 "base_bdevs_list": [ 00:14:37.538 { 00:14:37.538 "name": "spare", 00:14:37.538 
"uuid": "bbd1cc4c-5901-541c-898b-4b104342bbcb", 00:14:37.538 "is_configured": true, 00:14:37.538 "data_offset": 0, 00:14:37.538 "data_size": 65536 00:14:37.538 }, 00:14:37.538 { 00:14:37.538 "name": null, 00:14:37.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.538 "is_configured": false, 00:14:37.538 "data_offset": 0, 00:14:37.538 "data_size": 65536 00:14:37.538 }, 00:14:37.538 { 00:14:37.538 "name": "BaseBdev3", 00:14:37.538 "uuid": "cb30c6eb-abb3-5cbd-8022-1485710e92fd", 00:14:37.538 "is_configured": true, 00:14:37.538 "data_offset": 0, 00:14:37.538 "data_size": 65536 00:14:37.538 }, 00:14:37.538 { 00:14:37.538 "name": "BaseBdev4", 00:14:37.538 "uuid": "039b0c8a-eda4-5e3d-843e-48743ab1f398", 00:14:37.538 "is_configured": true, 00:14:37.538 "data_offset": 0, 00:14:37.538 "data_size": 65536 00:14:37.538 } 00:14:37.538 ] 00:14:37.538 }' 00:14:37.538 15:41:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.538 [2024-11-25 15:41:36.038878] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:37.538 15:41:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.538 15:41:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.538 15:41:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.538 15:41:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:37.805 [2024-11-25 15:41:36.466411] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:38.375 [2024-11-25 15:41:36.788379] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:38.375 [2024-11-25 15:41:36.888187] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev 
raid_bdev1 00:14:38.375 [2024-11-25 15:41:36.896543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.635 103.14 IOPS, 309.43 MiB/s [2024-11-25T15:41:37.316Z] 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.635 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.635 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.635 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.635 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.635 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.635 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.635 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.635 15:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.635 15:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.635 15:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.635 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.635 "name": "raid_bdev1", 00:14:38.636 "uuid": "e3c8e3e0-98f6-4900-94f6-767b137585e1", 00:14:38.636 "strip_size_kb": 0, 00:14:38.636 "state": "online", 00:14:38.636 "raid_level": "raid1", 00:14:38.636 "superblock": false, 00:14:38.636 "num_base_bdevs": 4, 00:14:38.636 "num_base_bdevs_discovered": 3, 00:14:38.636 "num_base_bdevs_operational": 3, 00:14:38.636 "base_bdevs_list": [ 00:14:38.636 { 00:14:38.636 "name": "spare", 00:14:38.636 "uuid": "bbd1cc4c-5901-541c-898b-4b104342bbcb", 
00:14:38.636 "is_configured": true, 00:14:38.636 "data_offset": 0, 00:14:38.636 "data_size": 65536 00:14:38.636 }, 00:14:38.636 { 00:14:38.636 "name": null, 00:14:38.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.636 "is_configured": false, 00:14:38.636 "data_offset": 0, 00:14:38.636 "data_size": 65536 00:14:38.636 }, 00:14:38.636 { 00:14:38.636 "name": "BaseBdev3", 00:14:38.636 "uuid": "cb30c6eb-abb3-5cbd-8022-1485710e92fd", 00:14:38.636 "is_configured": true, 00:14:38.636 "data_offset": 0, 00:14:38.636 "data_size": 65536 00:14:38.636 }, 00:14:38.636 { 00:14:38.636 "name": "BaseBdev4", 00:14:38.636 "uuid": "039b0c8a-eda4-5e3d-843e-48743ab1f398", 00:14:38.636 "is_configured": true, 00:14:38.636 "data_offset": 0, 00:14:38.636 "data_size": 65536 00:14:38.636 } 00:14:38.636 ] 00:14:38.636 }' 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.636 "name": "raid_bdev1", 00:14:38.636 "uuid": "e3c8e3e0-98f6-4900-94f6-767b137585e1", 00:14:38.636 "strip_size_kb": 0, 00:14:38.636 "state": "online", 00:14:38.636 "raid_level": "raid1", 00:14:38.636 "superblock": false, 00:14:38.636 "num_base_bdevs": 4, 00:14:38.636 "num_base_bdevs_discovered": 3, 00:14:38.636 "num_base_bdevs_operational": 3, 00:14:38.636 "base_bdevs_list": [ 00:14:38.636 { 00:14:38.636 "name": "spare", 00:14:38.636 "uuid": "bbd1cc4c-5901-541c-898b-4b104342bbcb", 00:14:38.636 "is_configured": true, 00:14:38.636 "data_offset": 0, 00:14:38.636 "data_size": 65536 00:14:38.636 }, 00:14:38.636 { 00:14:38.636 "name": null, 00:14:38.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.636 "is_configured": false, 00:14:38.636 "data_offset": 0, 00:14:38.636 "data_size": 65536 00:14:38.636 }, 00:14:38.636 { 00:14:38.636 "name": "BaseBdev3", 00:14:38.636 "uuid": "cb30c6eb-abb3-5cbd-8022-1485710e92fd", 00:14:38.636 "is_configured": true, 00:14:38.636 "data_offset": 0, 00:14:38.636 "data_size": 65536 00:14:38.636 }, 00:14:38.636 { 00:14:38.636 "name": "BaseBdev4", 00:14:38.636 "uuid": "039b0c8a-eda4-5e3d-843e-48743ab1f398", 00:14:38.636 "is_configured": true, 00:14:38.636 "data_offset": 0, 00:14:38.636 "data_size": 65536 00:14:38.636 } 00:14:38.636 ] 00:14:38.636 }' 00:14:38.636 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.896 
15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.896 15:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.896 15:41:37 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.896 "name": "raid_bdev1", 00:14:38.896 "uuid": "e3c8e3e0-98f6-4900-94f6-767b137585e1", 00:14:38.896 "strip_size_kb": 0, 00:14:38.896 "state": "online", 00:14:38.896 "raid_level": "raid1", 00:14:38.897 "superblock": false, 00:14:38.897 "num_base_bdevs": 4, 00:14:38.897 "num_base_bdevs_discovered": 3, 00:14:38.897 "num_base_bdevs_operational": 3, 00:14:38.897 "base_bdevs_list": [ 00:14:38.897 { 00:14:38.897 "name": "spare", 00:14:38.897 "uuid": "bbd1cc4c-5901-541c-898b-4b104342bbcb", 00:14:38.897 "is_configured": true, 00:14:38.897 "data_offset": 0, 00:14:38.897 "data_size": 65536 00:14:38.897 }, 00:14:38.897 { 00:14:38.897 "name": null, 00:14:38.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.897 "is_configured": false, 00:14:38.897 "data_offset": 0, 00:14:38.897 "data_size": 65536 00:14:38.897 }, 00:14:38.897 { 00:14:38.897 "name": "BaseBdev3", 00:14:38.897 "uuid": "cb30c6eb-abb3-5cbd-8022-1485710e92fd", 00:14:38.897 "is_configured": true, 00:14:38.897 "data_offset": 0, 00:14:38.897 "data_size": 65536 00:14:38.897 }, 00:14:38.897 { 00:14:38.897 "name": "BaseBdev4", 00:14:38.897 "uuid": "039b0c8a-eda4-5e3d-843e-48743ab1f398", 00:14:38.897 "is_configured": true, 00:14:38.897 "data_offset": 0, 00:14:38.897 "data_size": 65536 00:14:38.897 } 00:14:38.897 ] 00:14:38.897 }' 00:14:38.897 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.897 15:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.157 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:39.157 15:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.157 15:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.157 [2024-11-25 15:41:37.778293] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:14:39.157 [2024-11-25 15:41:37.778331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.417 00:14:39.417 Latency(us) 00:14:39.417 [2024-11-25T15:41:38.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.417 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:39.417 raid_bdev1 : 7.95 95.05 285.15 0.00 0.00 14281.27 296.92 115389.15 00:14:39.417 [2024-11-25T15:41:38.099Z] =================================================================================================================== 00:14:39.418 [2024-11-25T15:41:38.099Z] Total : 95.05 285.15 0.00 0.00 14281.27 296.92 115389.15 00:14:39.418 [2024-11-25 15:41:37.897957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.418 [2024-11-25 15:41:37.898002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.418 [2024-11-25 15:41:37.898114] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.418 [2024-11-25 15:41:37.898126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:39.418 { 00:14:39.418 "results": [ 00:14:39.418 { 00:14:39.418 "job": "raid_bdev1", 00:14:39.418 "core_mask": "0x1", 00:14:39.418 "workload": "randrw", 00:14:39.418 "percentage": 50, 00:14:39.418 "status": "finished", 00:14:39.418 "queue_depth": 2, 00:14:39.418 "io_size": 3145728, 00:14:39.418 "runtime": 7.95383, 00:14:39.418 "iops": 95.04854893805877, 00:14:39.418 "mibps": 285.1456468141763, 00:14:39.418 "io_failed": 0, 00:14:39.418 "io_timeout": 0, 00:14:39.418 "avg_latency_us": 14281.270026108454, 00:14:39.418 "min_latency_us": 296.91528384279474, 00:14:39.418 "max_latency_us": 115389.14934497817 00:14:39.418 } 00:14:39.418 ], 00:14:39.418 "core_count": 1 00:14:39.418 } 00:14:39.418 15:41:37 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:39.418 15:41:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk spare /dev/nbd0 00:14:39.678 /dev/nbd0 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:39.678 1+0 records in 00:14:39.678 1+0 records out 00:14:39.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414232 s, 9.9 MB/s 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@893 -- # return 0 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:39.678 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:39.679 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:39.679 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:39.679 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:39.679 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:39.679 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:39.679 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:39.679 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:39.679 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:39.679 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:39.679 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:39.679 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:39.679 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:39.679 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:39.939 /dev/nbd1 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:39.939 1+0 records in 00:14:39.939 1+0 records out 00:14:39.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530217 s, 7.7 MB/s 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.939 
15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:39.939 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:40.200 15:41:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:40.460 /dev/nbd1 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@877 -- # break 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:40.460 1+0 records in 00:14:40.460 1+0 records out 00:14:40.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421048 s, 9.7 MB/s 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:40.460 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@51 -- # local i 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:40.719 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78397 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78397 ']' 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78397 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78397 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78397' 00:14:40.978 killing process with pid 78397 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78397 
00:14:40.978 Received shutdown signal, test time was about 9.706507 seconds 00:14:40.978 00:14:40.978 Latency(us) 00:14:40.978 [2024-11-25T15:41:39.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.978 [2024-11-25T15:41:39.659Z] =================================================================================================================== 00:14:40.978 [2024-11-25T15:41:39.659Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:40.978 [2024-11-25 15:41:39.627360] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:40.978 15:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78397 00:14:41.546 [2024-11-25 15:41:40.028875] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:42.482 15:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:42.482 00:14:42.482 real 0m12.971s 00:14:42.482 user 0m16.316s 00:14:42.482 sys 0m1.756s 00:14:42.482 ************************************ 00:14:42.482 END TEST raid_rebuild_test_io 00:14:42.482 ************************************ 00:14:42.482 15:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:42.482 15:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.741 15:41:41 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:42.741 15:41:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:42.741 15:41:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.741 15:41:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:42.741 ************************************ 00:14:42.741 START TEST raid_rebuild_test_sb_io 00:14:42.741 ************************************ 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:42.741 
15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:42.741 15:41:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:42.741 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:42.742 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:42.742 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:42.742 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:42.742 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:42.742 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:42.742 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:42.742 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:42.742 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78807 00:14:42.742 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78807 00:14:42.742 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:42.742 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78807 ']' 00:14:42.742 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.742 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:42.742 15:41:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.742 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:42.742 15:41:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.742 [2024-11-25 15:41:41.298602] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:14:42.742 [2024-11-25 15:41:41.298788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:42.742 Zero copy mechanism will not be used. 00:14:42.742 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78807 ] 00:14:43.001 [2024-11-25 15:41:41.471424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.001 [2024-11-25 15:41:41.567770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.259 [2024-11-25 15:41:41.755607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.259 [2024-11-25 15:41:41.755733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.518 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:43.518 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:43.518 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:43.518 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:43.518 15:41:42 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.518 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.518 BaseBdev1_malloc 00:14:43.518 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.518 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:43.518 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.518 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.518 [2024-11-25 15:41:42.152944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:43.518 [2024-11-25 15:41:42.153078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.518 [2024-11-25 15:41:42.153123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:43.518 [2024-11-25 15:41:42.153156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.518 [2024-11-25 15:41:42.155193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.518 [2024-11-25 15:41:42.155282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:43.518 BaseBdev1 00:14:43.518 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.518 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:43.518 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:43.518 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.518 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.778 
BaseBdev2_malloc 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.778 [2024-11-25 15:41:42.206325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:43.778 [2024-11-25 15:41:42.206407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.778 [2024-11-25 15:41:42.206425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:43.778 [2024-11-25 15:41:42.206438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.778 [2024-11-25 15:41:42.208446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.778 [2024-11-25 15:41:42.208484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:43.778 BaseBdev2 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.778 BaseBdev3_malloc 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.778 15:41:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.778 [2024-11-25 15:41:42.274283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:43.778 [2024-11-25 15:41:42.274334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.778 [2024-11-25 15:41:42.274353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:43.778 [2024-11-25 15:41:42.274363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.778 [2024-11-25 15:41:42.276294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.778 [2024-11-25 15:41:42.276332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:43.778 BaseBdev3 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.778 BaseBdev4_malloc 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:43.778 15:41:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.778 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.778 [2024-11-25 15:41:42.327507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:43.779 [2024-11-25 15:41:42.327559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.779 [2024-11-25 15:41:42.327576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:43.779 [2024-11-25 15:41:42.327586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.779 [2024-11-25 15:41:42.329663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.779 [2024-11-25 15:41:42.329745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:43.779 BaseBdev4 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.779 spare_malloc 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.779 spare_delay 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.779 [2024-11-25 15:41:42.388448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:43.779 [2024-11-25 15:41:42.388499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.779 [2024-11-25 15:41:42.388533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:43.779 [2024-11-25 15:41:42.388544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.779 [2024-11-25 15:41:42.390514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.779 [2024-11-25 15:41:42.390596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:43.779 spare 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.779 [2024-11-25 15:41:42.400478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.779 [2024-11-25 15:41:42.402238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.779 [2024-11-25 15:41:42.402300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev3 is claimed 00:14:43.779 [2024-11-25 15:41:42.402350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:43.779 [2024-11-25 15:41:42.402528] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:43.779 [2024-11-25 15:41:42.402544] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:43.779 [2024-11-25 15:41:42.402762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:43.779 [2024-11-25 15:41:42.402929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:43.779 [2024-11-25 15:41:42.402939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:43.779 [2024-11-25 15:41:42.403117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.779 15:41:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.779 "name": "raid_bdev1", 00:14:43.779 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:43.779 "strip_size_kb": 0, 00:14:43.779 "state": "online", 00:14:43.779 "raid_level": "raid1", 00:14:43.779 "superblock": true, 00:14:43.779 "num_base_bdevs": 4, 00:14:43.779 "num_base_bdevs_discovered": 4, 00:14:43.779 "num_base_bdevs_operational": 4, 00:14:43.779 "base_bdevs_list": [ 00:14:43.779 { 00:14:43.779 "name": "BaseBdev1", 00:14:43.779 "uuid": "21ede5ed-3d7f-5e38-a588-f277b3dfdaeb", 00:14:43.779 "is_configured": true, 00:14:43.779 "data_offset": 2048, 00:14:43.779 "data_size": 63488 00:14:43.779 }, 00:14:43.779 { 00:14:43.779 "name": "BaseBdev2", 00:14:43.779 "uuid": "23201d2f-c3b1-5fcc-9fc0-fdd6f6a45880", 00:14:43.779 "is_configured": true, 00:14:43.779 "data_offset": 2048, 00:14:43.779 "data_size": 63488 00:14:43.779 }, 00:14:43.779 { 00:14:43.779 "name": "BaseBdev3", 00:14:43.779 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:43.779 "is_configured": true, 00:14:43.779 "data_offset": 2048, 00:14:43.779 "data_size": 63488 00:14:43.779 }, 00:14:43.779 { 00:14:43.779 "name": "BaseBdev4", 00:14:43.779 
"uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:43.779 "is_configured": true, 00:14:43.779 "data_offset": 2048, 00:14:43.779 "data_size": 63488 00:14:43.779 } 00:14:43.779 ] 00:14:43.779 }' 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.779 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.348 [2024-11-25 15:41:42.848003] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:44.348 15:41:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.348 [2024-11-25 15:41:42.935509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.348 15:41:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.348 "name": "raid_bdev1", 00:14:44.348 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:44.348 "strip_size_kb": 0, 00:14:44.348 "state": "online", 00:14:44.348 "raid_level": "raid1", 00:14:44.348 "superblock": true, 00:14:44.348 "num_base_bdevs": 4, 00:14:44.348 "num_base_bdevs_discovered": 3, 00:14:44.348 "num_base_bdevs_operational": 3, 00:14:44.348 "base_bdevs_list": [ 00:14:44.348 { 00:14:44.348 "name": null, 00:14:44.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.348 "is_configured": false, 00:14:44.348 "data_offset": 0, 00:14:44.348 "data_size": 63488 00:14:44.348 }, 00:14:44.348 { 00:14:44.348 "name": "BaseBdev2", 00:14:44.348 "uuid": "23201d2f-c3b1-5fcc-9fc0-fdd6f6a45880", 00:14:44.348 "is_configured": true, 00:14:44.348 "data_offset": 2048, 00:14:44.348 "data_size": 63488 00:14:44.348 }, 00:14:44.348 { 00:14:44.348 "name": "BaseBdev3", 00:14:44.348 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:44.348 "is_configured": true, 00:14:44.348 "data_offset": 2048, 00:14:44.348 "data_size": 63488 00:14:44.348 }, 00:14:44.348 { 00:14:44.348 "name": "BaseBdev4", 00:14:44.348 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:44.348 "is_configured": true, 00:14:44.348 "data_offset": 2048, 00:14:44.348 "data_size": 63488 00:14:44.348 } 00:14:44.348 ] 00:14:44.348 }' 00:14:44.348 15:41:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.348 15:41:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.608 [2024-11-25 15:41:43.034526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:44.608 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:44.608 Zero copy mechanism will not be used. 00:14:44.608 Running I/O for 60 seconds... 00:14:44.870 15:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:44.870 15:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.870 15:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.870 [2024-11-25 15:41:43.398165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.870 15:41:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.870 [2024-11-25 15:41:43.449489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:44.870 15:41:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:44.870 [2024-11-25 15:41:43.451445] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.135 [2024-11-25 15:41:43.566711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:45.135 [2024-11-25 15:41:43.568266] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:45.135 [2024-11-25 15:41:43.786778] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:45.135 [2024-11-25 15:41:43.787170] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:45.653 171.00 IOPS, 513.00 MiB/s 
[2024-11-25T15:41:44.334Z] [2024-11-25 15:41:44.133163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:45.913 [2024-11-25 15:41:44.336277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:45.913 [2024-11-25 15:41:44.336564] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.913 "name": "raid_bdev1", 00:14:45.913 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:45.913 "strip_size_kb": 0, 00:14:45.913 "state": "online", 00:14:45.913 "raid_level": "raid1", 00:14:45.913 "superblock": true, 00:14:45.913 
"num_base_bdevs": 4, 00:14:45.913 "num_base_bdevs_discovered": 4, 00:14:45.913 "num_base_bdevs_operational": 4, 00:14:45.913 "process": { 00:14:45.913 "type": "rebuild", 00:14:45.913 "target": "spare", 00:14:45.913 "progress": { 00:14:45.913 "blocks": 12288, 00:14:45.913 "percent": 19 00:14:45.913 } 00:14:45.913 }, 00:14:45.913 "base_bdevs_list": [ 00:14:45.913 { 00:14:45.913 "name": "spare", 00:14:45.913 "uuid": "3a8b7c39-51a4-55f1-9b09-ecb5c19f9371", 00:14:45.913 "is_configured": true, 00:14:45.913 "data_offset": 2048, 00:14:45.913 "data_size": 63488 00:14:45.913 }, 00:14:45.913 { 00:14:45.913 "name": "BaseBdev2", 00:14:45.913 "uuid": "23201d2f-c3b1-5fcc-9fc0-fdd6f6a45880", 00:14:45.913 "is_configured": true, 00:14:45.913 "data_offset": 2048, 00:14:45.913 "data_size": 63488 00:14:45.913 }, 00:14:45.913 { 00:14:45.913 "name": "BaseBdev3", 00:14:45.913 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:45.913 "is_configured": true, 00:14:45.913 "data_offset": 2048, 00:14:45.913 "data_size": 63488 00:14:45.913 }, 00:14:45.913 { 00:14:45.913 "name": "BaseBdev4", 00:14:45.913 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:45.913 "is_configured": true, 00:14:45.913 "data_offset": 2048, 00:14:45.913 "data_size": 63488 00:14:45.913 } 00:14:45.913 ] 00:14:45.913 }' 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:45.913 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.173 [2024-11-25 15:41:44.593208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.173 [2024-11-25 15:41:44.686567] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:46.173 [2024-11-25 15:41:44.686945] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:46.173 [2024-11-25 15:41:44.693377] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:46.173 [2024-11-25 15:41:44.710070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.173 [2024-11-25 15:41:44.710131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.173 [2024-11-25 15:41:44.710144] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:46.173 [2024-11-25 15:41:44.738355] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.173 "name": "raid_bdev1", 00:14:46.173 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:46.173 "strip_size_kb": 0, 00:14:46.173 "state": "online", 00:14:46.173 "raid_level": "raid1", 00:14:46.173 "superblock": true, 00:14:46.173 "num_base_bdevs": 4, 00:14:46.173 "num_base_bdevs_discovered": 3, 00:14:46.173 "num_base_bdevs_operational": 3, 00:14:46.173 "base_bdevs_list": [ 00:14:46.173 { 00:14:46.173 "name": null, 00:14:46.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.173 "is_configured": false, 00:14:46.173 "data_offset": 0, 00:14:46.173 "data_size": 63488 00:14:46.173 }, 00:14:46.173 { 00:14:46.173 "name": "BaseBdev2", 00:14:46.173 "uuid": "23201d2f-c3b1-5fcc-9fc0-fdd6f6a45880", 00:14:46.173 "is_configured": true, 00:14:46.173 "data_offset": 2048, 00:14:46.173 "data_size": 63488 00:14:46.173 }, 00:14:46.173 { 00:14:46.173 "name": 
"BaseBdev3", 00:14:46.173 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:46.173 "is_configured": true, 00:14:46.173 "data_offset": 2048, 00:14:46.173 "data_size": 63488 00:14:46.173 }, 00:14:46.173 { 00:14:46.173 "name": "BaseBdev4", 00:14:46.173 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:46.173 "is_configured": true, 00:14:46.173 "data_offset": 2048, 00:14:46.173 "data_size": 63488 00:14:46.173 } 00:14:46.173 ] 00:14:46.173 }' 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.173 15:41:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.692 157.00 IOPS, 471.00 MiB/s [2024-11-25T15:41:45.374Z] 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.693 "name": "raid_bdev1", 00:14:46.693 "uuid": 
"e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:46.693 "strip_size_kb": 0, 00:14:46.693 "state": "online", 00:14:46.693 "raid_level": "raid1", 00:14:46.693 "superblock": true, 00:14:46.693 "num_base_bdevs": 4, 00:14:46.693 "num_base_bdevs_discovered": 3, 00:14:46.693 "num_base_bdevs_operational": 3, 00:14:46.693 "base_bdevs_list": [ 00:14:46.693 { 00:14:46.693 "name": null, 00:14:46.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.693 "is_configured": false, 00:14:46.693 "data_offset": 0, 00:14:46.693 "data_size": 63488 00:14:46.693 }, 00:14:46.693 { 00:14:46.693 "name": "BaseBdev2", 00:14:46.693 "uuid": "23201d2f-c3b1-5fcc-9fc0-fdd6f6a45880", 00:14:46.693 "is_configured": true, 00:14:46.693 "data_offset": 2048, 00:14:46.693 "data_size": 63488 00:14:46.693 }, 00:14:46.693 { 00:14:46.693 "name": "BaseBdev3", 00:14:46.693 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:46.693 "is_configured": true, 00:14:46.693 "data_offset": 2048, 00:14:46.693 "data_size": 63488 00:14:46.693 }, 00:14:46.693 { 00:14:46.693 "name": "BaseBdev4", 00:14:46.693 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:46.693 "is_configured": true, 00:14:46.693 "data_offset": 2048, 00:14:46.693 "data_size": 63488 00:14:46.693 } 00:14:46.693 ] 00:14:46.693 }' 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.693 15:41:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.693 [2024-11-25 15:41:45.322461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.693 15:41:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:46.693 [2024-11-25 15:41:45.361310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:46.693 [2024-11-25 15:41:45.363191] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:46.953 [2024-11-25 15:41:45.469791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:46.953 [2024-11-25 15:41:45.471294] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:47.212 [2024-11-25 15:41:45.677072] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:47.212 [2024-11-25 15:41:45.677388] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:47.471 [2024-11-25 15:41:46.009610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:47.471 154.00 IOPS, 462.00 MiB/s [2024-11-25T15:41:46.152Z] [2024-11-25 15:41:46.139256] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:47.471 [2024-11-25 15:41:46.139588] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:47.732 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.732 15:41:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.732 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.732 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.732 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.732 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.732 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.732 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.732 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.732 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.732 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.732 "name": "raid_bdev1", 00:14:47.732 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:47.732 "strip_size_kb": 0, 00:14:47.732 "state": "online", 00:14:47.732 "raid_level": "raid1", 00:14:47.732 "superblock": true, 00:14:47.732 "num_base_bdevs": 4, 00:14:47.732 "num_base_bdevs_discovered": 4, 00:14:47.732 "num_base_bdevs_operational": 4, 00:14:47.732 "process": { 00:14:47.732 "type": "rebuild", 00:14:47.732 "target": "spare", 00:14:47.732 "progress": { 00:14:47.732 "blocks": 10240, 00:14:47.732 "percent": 16 00:14:47.732 } 00:14:47.732 }, 00:14:47.732 "base_bdevs_list": [ 00:14:47.732 { 00:14:47.732 "name": "spare", 00:14:47.732 "uuid": "3a8b7c39-51a4-55f1-9b09-ecb5c19f9371", 00:14:47.732 "is_configured": true, 00:14:47.732 "data_offset": 2048, 00:14:47.732 "data_size": 63488 00:14:47.732 }, 00:14:47.732 { 00:14:47.732 "name": "BaseBdev2", 00:14:47.732 "uuid": 
"23201d2f-c3b1-5fcc-9fc0-fdd6f6a45880", 00:14:47.732 "is_configured": true, 00:14:47.732 "data_offset": 2048, 00:14:47.732 "data_size": 63488 00:14:47.732 }, 00:14:47.732 { 00:14:47.732 "name": "BaseBdev3", 00:14:47.732 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:47.732 "is_configured": true, 00:14:47.732 "data_offset": 2048, 00:14:47.732 "data_size": 63488 00:14:47.732 }, 00:14:47.732 { 00:14:47.732 "name": "BaseBdev4", 00:14:47.732 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:47.732 "is_configured": true, 00:14:47.732 "data_offset": 2048, 00:14:47.732 "data_size": 63488 00:14:47.732 } 00:14:47.732 ] 00:14:47.732 }' 00:14:47.992 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.992 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.992 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.992 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.992 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:47.992 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:47.992 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:47.992 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:47.992 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:47.992 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:47.992 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:47.992 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:47.992 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.992 [2024-11-25 15:41:46.492094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:47.992 [2024-11-25 15:41:46.495899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:48.252 [2024-11-25 15:41:46.702401] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:48.252 [2024-11-25 15:41:46.703238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:48.252 [2024-11-25 15:41:46.905355] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:48.252 [2024-11-25 15:41:46.905449] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:48.252 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.252 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:48.252 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:48.252 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.252 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.252 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.252 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.252 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.252 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:48.252 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.252 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.252 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.512 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.512 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.512 "name": "raid_bdev1", 00:14:48.512 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:48.512 "strip_size_kb": 0, 00:14:48.512 "state": "online", 00:14:48.512 "raid_level": "raid1", 00:14:48.512 "superblock": true, 00:14:48.512 "num_base_bdevs": 4, 00:14:48.512 "num_base_bdevs_discovered": 3, 00:14:48.512 "num_base_bdevs_operational": 3, 00:14:48.512 "process": { 00:14:48.512 "type": "rebuild", 00:14:48.512 "target": "spare", 00:14:48.512 "progress": { 00:14:48.512 "blocks": 16384, 00:14:48.512 "percent": 25 00:14:48.512 } 00:14:48.512 }, 00:14:48.512 "base_bdevs_list": [ 00:14:48.512 { 00:14:48.512 "name": "spare", 00:14:48.512 "uuid": "3a8b7c39-51a4-55f1-9b09-ecb5c19f9371", 00:14:48.512 "is_configured": true, 00:14:48.512 "data_offset": 2048, 00:14:48.512 "data_size": 63488 00:14:48.512 }, 00:14:48.512 { 00:14:48.512 "name": null, 00:14:48.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.512 "is_configured": false, 00:14:48.512 "data_offset": 0, 00:14:48.512 "data_size": 63488 00:14:48.512 }, 00:14:48.512 { 00:14:48.512 "name": "BaseBdev3", 00:14:48.512 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:48.512 "is_configured": true, 00:14:48.512 "data_offset": 2048, 00:14:48.512 "data_size": 63488 00:14:48.512 }, 00:14:48.512 { 00:14:48.512 "name": "BaseBdev4", 00:14:48.512 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:48.512 "is_configured": true, 00:14:48.512 "data_offset": 2048, 
00:14:48.512 "data_size": 63488 00:14:48.512 } 00:14:48.512 ] 00:14:48.512 }' 00:14:48.512 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.512 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.512 15:41:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.512 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.512 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=480 00:14:48.512 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:48.512 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.512 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.512 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.512 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.512 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.512 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.512 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.512 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.512 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.512 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.512 134.75 IOPS, 404.25 MiB/s [2024-11-25T15:41:47.193Z] 15:41:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.512 "name": "raid_bdev1", 00:14:48.512 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:48.512 "strip_size_kb": 0, 00:14:48.512 "state": "online", 00:14:48.512 "raid_level": "raid1", 00:14:48.512 "superblock": true, 00:14:48.512 "num_base_bdevs": 4, 00:14:48.512 "num_base_bdevs_discovered": 3, 00:14:48.512 "num_base_bdevs_operational": 3, 00:14:48.512 "process": { 00:14:48.513 "type": "rebuild", 00:14:48.513 "target": "spare", 00:14:48.513 "progress": { 00:14:48.513 "blocks": 16384, 00:14:48.513 "percent": 25 00:14:48.513 } 00:14:48.513 }, 00:14:48.513 "base_bdevs_list": [ 00:14:48.513 { 00:14:48.513 "name": "spare", 00:14:48.513 "uuid": "3a8b7c39-51a4-55f1-9b09-ecb5c19f9371", 00:14:48.513 "is_configured": true, 00:14:48.513 "data_offset": 2048, 00:14:48.513 "data_size": 63488 00:14:48.513 }, 00:14:48.513 { 00:14:48.513 "name": null, 00:14:48.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.513 "is_configured": false, 00:14:48.513 "data_offset": 0, 00:14:48.513 "data_size": 63488 00:14:48.513 }, 00:14:48.513 { 00:14:48.513 "name": "BaseBdev3", 00:14:48.513 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:48.513 "is_configured": true, 00:14:48.513 "data_offset": 2048, 00:14:48.513 "data_size": 63488 00:14:48.513 }, 00:14:48.513 { 00:14:48.513 "name": "BaseBdev4", 00:14:48.513 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:48.513 "is_configured": true, 00:14:48.513 "data_offset": 2048, 00:14:48.513 "data_size": 63488 00:14:48.513 } 00:14:48.513 ] 00:14:48.513 }' 00:14:48.513 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.513 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.513 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.513 15:41:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.513 15:41:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:48.513 [2024-11-25 15:41:47.164544] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:48.774 [2024-11-25 15:41:47.284520] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:49.342 [2024-11-25 15:41:47.952359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:49.602 121.20 IOPS, 363.60 MiB/s [2024-11-25T15:41:48.283Z] 15:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.602 15:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.602 15:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.602 15:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.602 15:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.602 15:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.602 15:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.602 15:41:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.602 15:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.602 15:41:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.602 15:41:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.602 
15:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.602 "name": "raid_bdev1", 00:14:49.602 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:49.602 "strip_size_kb": 0, 00:14:49.602 "state": "online", 00:14:49.602 "raid_level": "raid1", 00:14:49.602 "superblock": true, 00:14:49.602 "num_base_bdevs": 4, 00:14:49.602 "num_base_bdevs_discovered": 3, 00:14:49.602 "num_base_bdevs_operational": 3, 00:14:49.602 "process": { 00:14:49.602 "type": "rebuild", 00:14:49.602 "target": "spare", 00:14:49.602 "progress": { 00:14:49.602 "blocks": 32768, 00:14:49.602 "percent": 51 00:14:49.602 } 00:14:49.602 }, 00:14:49.602 "base_bdevs_list": [ 00:14:49.602 { 00:14:49.602 "name": "spare", 00:14:49.602 "uuid": "3a8b7c39-51a4-55f1-9b09-ecb5c19f9371", 00:14:49.602 "is_configured": true, 00:14:49.602 "data_offset": 2048, 00:14:49.602 "data_size": 63488 00:14:49.602 }, 00:14:49.602 { 00:14:49.602 "name": null, 00:14:49.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.602 "is_configured": false, 00:14:49.602 "data_offset": 0, 00:14:49.602 "data_size": 63488 00:14:49.602 }, 00:14:49.602 { 00:14:49.602 "name": "BaseBdev3", 00:14:49.602 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:49.602 "is_configured": true, 00:14:49.602 "data_offset": 2048, 00:14:49.602 "data_size": 63488 00:14:49.602 }, 00:14:49.602 { 00:14:49.602 "name": "BaseBdev4", 00:14:49.602 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:49.602 "is_configured": true, 00:14:49.602 "data_offset": 2048, 00:14:49.602 "data_size": 63488 00:14:49.602 } 00:14:49.602 ] 00:14:49.602 }' 00:14:49.602 15:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.602 15:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.602 15:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.862 15:41:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.862 15:41:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.862 [2024-11-25 15:41:48.395656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:50.169 [2024-11-25 15:41:48.730655] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:50.431 [2024-11-25 15:41:48.945397] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:50.697 108.17 IOPS, 324.50 MiB/s [2024-11-25T15:41:49.378Z] [2024-11-25 15:41:49.262907] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:50.697 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.697 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.697 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.697 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.697 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.698 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.698 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.698 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.698 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.698 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:50.698 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.698 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.698 "name": "raid_bdev1", 00:14:50.698 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:50.698 "strip_size_kb": 0, 00:14:50.698 "state": "online", 00:14:50.698 "raid_level": "raid1", 00:14:50.698 "superblock": true, 00:14:50.698 "num_base_bdevs": 4, 00:14:50.698 "num_base_bdevs_discovered": 3, 00:14:50.698 "num_base_bdevs_operational": 3, 00:14:50.698 "process": { 00:14:50.698 "type": "rebuild", 00:14:50.698 "target": "spare", 00:14:50.698 "progress": { 00:14:50.698 "blocks": 53248, 00:14:50.698 "percent": 83 00:14:50.698 } 00:14:50.698 }, 00:14:50.698 "base_bdevs_list": [ 00:14:50.698 { 00:14:50.698 "name": "spare", 00:14:50.698 "uuid": "3a8b7c39-51a4-55f1-9b09-ecb5c19f9371", 00:14:50.698 "is_configured": true, 00:14:50.698 "data_offset": 2048, 00:14:50.698 "data_size": 63488 00:14:50.698 }, 00:14:50.698 { 00:14:50.698 "name": null, 00:14:50.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.698 "is_configured": false, 00:14:50.698 "data_offset": 0, 00:14:50.698 "data_size": 63488 00:14:50.698 }, 00:14:50.698 { 00:14:50.698 "name": "BaseBdev3", 00:14:50.698 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:50.698 "is_configured": true, 00:14:50.698 "data_offset": 2048, 00:14:50.698 "data_size": 63488 00:14:50.698 }, 00:14:50.698 { 00:14:50.698 "name": "BaseBdev4", 00:14:50.698 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:50.698 "is_configured": true, 00:14:50.698 "data_offset": 2048, 00:14:50.698 "data_size": 63488 00:14:50.698 } 00:14:50.698 ] 00:14:50.698 }' 00:14:50.698 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.976 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:14:50.976 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.976 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.976 15:41:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:50.976 [2024-11-25 15:41:49.586904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:51.356 [2024-11-25 15:41:49.909432] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:51.356 [2024-11-25 15:41:50.012249] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:51.356 [2024-11-25 15:41:50.014714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.876 97.71 IOPS, 293.14 MiB/s [2024-11-25T15:41:50.557Z] 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:51.876 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.876 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.876 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.876 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.876 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.876 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.876 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.876 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:51.876 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.876 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.876 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.876 "name": "raid_bdev1", 00:14:51.876 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:51.876 "strip_size_kb": 0, 00:14:51.876 "state": "online", 00:14:51.876 "raid_level": "raid1", 00:14:51.876 "superblock": true, 00:14:51.876 "num_base_bdevs": 4, 00:14:51.876 "num_base_bdevs_discovered": 3, 00:14:51.876 "num_base_bdevs_operational": 3, 00:14:51.876 "base_bdevs_list": [ 00:14:51.876 { 00:14:51.876 "name": "spare", 00:14:51.876 "uuid": "3a8b7c39-51a4-55f1-9b09-ecb5c19f9371", 00:14:51.876 "is_configured": true, 00:14:51.876 "data_offset": 2048, 00:14:51.876 "data_size": 63488 00:14:51.876 }, 00:14:51.876 { 00:14:51.876 "name": null, 00:14:51.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.876 "is_configured": false, 00:14:51.876 "data_offset": 0, 00:14:51.876 "data_size": 63488 00:14:51.876 }, 00:14:51.876 { 00:14:51.876 "name": "BaseBdev3", 00:14:51.876 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:51.876 "is_configured": true, 00:14:51.876 "data_offset": 2048, 00:14:51.876 "data_size": 63488 00:14:51.876 }, 00:14:51.876 { 00:14:51.876 "name": "BaseBdev4", 00:14:51.876 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:51.877 "is_configured": true, 00:14:51.877 "data_offset": 2048, 00:14:51.877 "data_size": 63488 00:14:51.877 } 00:14:51.877 ] 00:14:51.877 }' 00:14:51.877 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.877 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:51.877 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.137 
15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.137 "name": "raid_bdev1", 00:14:52.137 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:52.137 "strip_size_kb": 0, 00:14:52.137 "state": "online", 00:14:52.137 "raid_level": "raid1", 00:14:52.137 "superblock": true, 00:14:52.137 "num_base_bdevs": 4, 00:14:52.137 "num_base_bdevs_discovered": 3, 00:14:52.137 "num_base_bdevs_operational": 3, 00:14:52.137 "base_bdevs_list": [ 00:14:52.137 { 00:14:52.137 "name": "spare", 00:14:52.137 "uuid": "3a8b7c39-51a4-55f1-9b09-ecb5c19f9371", 00:14:52.137 "is_configured": true, 00:14:52.137 "data_offset": 2048, 
00:14:52.137 "data_size": 63488 00:14:52.137 }, 00:14:52.137 { 00:14:52.137 "name": null, 00:14:52.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.137 "is_configured": false, 00:14:52.137 "data_offset": 0, 00:14:52.137 "data_size": 63488 00:14:52.137 }, 00:14:52.137 { 00:14:52.137 "name": "BaseBdev3", 00:14:52.137 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:52.137 "is_configured": true, 00:14:52.137 "data_offset": 2048, 00:14:52.137 "data_size": 63488 00:14:52.137 }, 00:14:52.137 { 00:14:52.137 "name": "BaseBdev4", 00:14:52.137 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:52.137 "is_configured": true, 00:14:52.137 "data_offset": 2048, 00:14:52.137 "data_size": 63488 00:14:52.137 } 00:14:52.137 ] 00:14:52.137 }' 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.137 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.137 "name": "raid_bdev1", 00:14:52.137 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:52.137 "strip_size_kb": 0, 00:14:52.137 "state": "online", 00:14:52.137 "raid_level": "raid1", 00:14:52.137 "superblock": true, 00:14:52.137 "num_base_bdevs": 4, 00:14:52.137 "num_base_bdevs_discovered": 3, 00:14:52.137 "num_base_bdevs_operational": 3, 00:14:52.137 "base_bdevs_list": [ 00:14:52.137 { 00:14:52.137 "name": "spare", 00:14:52.137 "uuid": "3a8b7c39-51a4-55f1-9b09-ecb5c19f9371", 00:14:52.137 "is_configured": true, 00:14:52.137 "data_offset": 2048, 00:14:52.137 "data_size": 63488 00:14:52.137 }, 00:14:52.137 { 00:14:52.137 "name": null, 00:14:52.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.137 "is_configured": false, 00:14:52.137 "data_offset": 0, 00:14:52.137 "data_size": 63488 00:14:52.137 }, 00:14:52.137 { 00:14:52.137 "name": "BaseBdev3", 00:14:52.137 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:52.137 "is_configured": true, 
00:14:52.137 "data_offset": 2048, 00:14:52.137 "data_size": 63488 00:14:52.137 }, 00:14:52.137 { 00:14:52.138 "name": "BaseBdev4", 00:14:52.138 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:52.138 "is_configured": true, 00:14:52.138 "data_offset": 2048, 00:14:52.138 "data_size": 63488 00:14:52.138 } 00:14:52.138 ] 00:14:52.138 }' 00:14:52.138 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.138 15:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.658 91.75 IOPS, 275.25 MiB/s [2024-11-25T15:41:51.339Z] 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.658 [2024-11-25 15:41:51.164628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:52.658 [2024-11-25 15:41:51.164661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.658 00:14:52.658 Latency(us) 00:14:52.658 [2024-11-25T15:41:51.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.658 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:52.658 raid_bdev1 : 8.18 90.47 271.40 0.00 0.00 15198.86 318.38 109894.43 00:14:52.658 [2024-11-25T15:41:51.339Z] =================================================================================================================== 00:14:52.658 [2024-11-25T15:41:51.339Z] Total : 90.47 271.40 0.00 0.00 15198.86 318.38 109894.43 00:14:52.658 { 00:14:52.658 "results": [ 00:14:52.658 { 00:14:52.658 "job": "raid_bdev1", 00:14:52.658 "core_mask": "0x1", 00:14:52.658 "workload": "randrw", 00:14:52.658 "percentage": 50, 00:14:52.658 "status": "finished", 
00:14:52.658 "queue_depth": 2, 00:14:52.658 "io_size": 3145728, 00:14:52.658 "runtime": 8.179702, 00:14:52.658 "iops": 90.46784344955354, 00:14:52.658 "mibps": 271.40353034866064, 00:14:52.658 "io_failed": 0, 00:14:52.658 "io_timeout": 0, 00:14:52.658 "avg_latency_us": 15198.86351469373, 00:14:52.658 "min_latency_us": 318.37903930131006, 00:14:52.658 "max_latency_us": 109894.42794759825 00:14:52.658 } 00:14:52.658 ], 00:14:52.658 "core_count": 1 00:14:52.658 } 00:14:52.658 [2024-11-25 15:41:51.220169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.658 [2024-11-25 15:41:51.220213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.658 [2024-11-25 15:41:51.220303] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.658 [2024-11-25 15:41:51.220313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:52.658 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:52.918 /dev/nbd0 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:52.918 
15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:52.918 1+0 records in 00:14:52.918 1+0 records out 00:14:52.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215496 s, 19.0 MB/s 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:52.918 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock 
BaseBdev3 /dev/nbd1 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:52.919 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:53.179 /dev/nbd1 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:53.179 1+0 records in 00:14:53.179 1+0 records out 00:14:53.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403519 s, 10.2 MB/s 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.179 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:53.439 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:53.439 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.439 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:53.439 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:53.439 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:53.439 15:41:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.439 15:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:53.699 /dev/nbd1 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:53.699 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:53.959 1+0 records in 00:14:53.959 1+0 records out 00:14:53.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396399 s, 10.3 MB/s 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat 
-c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.959 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:54.220 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:54.220 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:54.220 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:14:54.220 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:54.220 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:54.220 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:54.220 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:54.220 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:54.220 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:54.220 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.220 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:54.220 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:54.220 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:54.220 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.220 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:54.480 
15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.480 [2024-11-25 15:41:52.936732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:54.480 [2024-11-25 15:41:52.936850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.480 [2024-11-25 15:41:52.936896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:54.480 [2024-11-25 15:41:52.936931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.480 [2024-11-25 15:41:52.939112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.480 [2024-11-25 15:41:52.939195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:54.480 [2024-11-25 15:41:52.939307] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:54.480 [2024-11-25 15:41:52.939388] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:54.480 [2024-11-25 15:41:52.939593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.480 [2024-11-25 15:41:52.939726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:54.480 spare 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.480 15:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.480 [2024-11-25 15:41:53.039649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:54.480 [2024-11-25 15:41:53.039708] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:54.480 [2024-11-25 15:41:53.040020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:54.480 [2024-11-25 15:41:53.040225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:54.480 [2024-11-25 15:41:53.040268] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:54.480 [2024-11-25 15:41:53.040471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.480 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.480 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:54.480 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.480 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:14:54.480 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.480 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.480 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.480 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.480 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.480 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.480 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.480 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.480 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.480 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.481 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.481 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.481 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.481 "name": "raid_bdev1", 00:14:54.481 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:54.481 "strip_size_kb": 0, 00:14:54.481 "state": "online", 00:14:54.481 "raid_level": "raid1", 00:14:54.481 "superblock": true, 00:14:54.481 "num_base_bdevs": 4, 00:14:54.481 "num_base_bdevs_discovered": 3, 00:14:54.481 "num_base_bdevs_operational": 3, 00:14:54.481 "base_bdevs_list": [ 00:14:54.481 { 00:14:54.481 "name": "spare", 00:14:54.481 "uuid": "3a8b7c39-51a4-55f1-9b09-ecb5c19f9371", 00:14:54.481 "is_configured": true, 
00:14:54.481 "data_offset": 2048, 00:14:54.481 "data_size": 63488 00:14:54.481 }, 00:14:54.481 { 00:14:54.481 "name": null, 00:14:54.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.481 "is_configured": false, 00:14:54.481 "data_offset": 2048, 00:14:54.481 "data_size": 63488 00:14:54.481 }, 00:14:54.481 { 00:14:54.481 "name": "BaseBdev3", 00:14:54.481 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:54.481 "is_configured": true, 00:14:54.481 "data_offset": 2048, 00:14:54.481 "data_size": 63488 00:14:54.481 }, 00:14:54.481 { 00:14:54.481 "name": "BaseBdev4", 00:14:54.481 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:54.481 "is_configured": true, 00:14:54.481 "data_offset": 2048, 00:14:54.481 "data_size": 63488 00:14:54.481 } 00:14:54.481 ] 00:14:54.481 }' 00:14:54.481 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.481 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.051 "name": "raid_bdev1", 00:14:55.051 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:55.051 "strip_size_kb": 0, 00:14:55.051 "state": "online", 00:14:55.051 "raid_level": "raid1", 00:14:55.051 "superblock": true, 00:14:55.051 "num_base_bdevs": 4, 00:14:55.051 "num_base_bdevs_discovered": 3, 00:14:55.051 "num_base_bdevs_operational": 3, 00:14:55.051 "base_bdevs_list": [ 00:14:55.051 { 00:14:55.051 "name": "spare", 00:14:55.051 "uuid": "3a8b7c39-51a4-55f1-9b09-ecb5c19f9371", 00:14:55.051 "is_configured": true, 00:14:55.051 "data_offset": 2048, 00:14:55.051 "data_size": 63488 00:14:55.051 }, 00:14:55.051 { 00:14:55.051 "name": null, 00:14:55.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.051 "is_configured": false, 00:14:55.051 "data_offset": 2048, 00:14:55.051 "data_size": 63488 00:14:55.051 }, 00:14:55.051 { 00:14:55.051 "name": "BaseBdev3", 00:14:55.051 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:55.051 "is_configured": true, 00:14:55.051 "data_offset": 2048, 00:14:55.051 "data_size": 63488 00:14:55.051 }, 00:14:55.051 { 00:14:55.051 "name": "BaseBdev4", 00:14:55.051 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:55.051 "is_configured": true, 00:14:55.051 "data_offset": 2048, 00:14:55.051 "data_size": 63488 00:14:55.051 } 00:14:55.051 ] 00:14:55.051 }' 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.051 [2024-11-25 15:41:53.699598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.051 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.052 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.052 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.052 15:41:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.052 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.052 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.052 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.052 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.052 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.052 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.052 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.052 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.311 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.311 "name": "raid_bdev1", 00:14:55.311 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:55.311 "strip_size_kb": 0, 00:14:55.311 "state": "online", 00:14:55.311 "raid_level": "raid1", 00:14:55.311 "superblock": true, 00:14:55.311 "num_base_bdevs": 4, 00:14:55.311 "num_base_bdevs_discovered": 2, 00:14:55.311 "num_base_bdevs_operational": 2, 00:14:55.311 "base_bdevs_list": [ 00:14:55.311 { 00:14:55.311 "name": null, 00:14:55.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.311 "is_configured": false, 00:14:55.311 "data_offset": 0, 00:14:55.311 "data_size": 63488 00:14:55.311 }, 00:14:55.311 { 00:14:55.311 "name": null, 00:14:55.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.311 "is_configured": false, 00:14:55.311 "data_offset": 2048, 00:14:55.311 "data_size": 63488 00:14:55.311 }, 00:14:55.311 { 00:14:55.311 "name": "BaseBdev3", 00:14:55.311 "uuid": 
"a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:55.311 "is_configured": true, 00:14:55.311 "data_offset": 2048, 00:14:55.311 "data_size": 63488 00:14:55.311 }, 00:14:55.311 { 00:14:55.311 "name": "BaseBdev4", 00:14:55.311 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:55.311 "is_configured": true, 00:14:55.311 "data_offset": 2048, 00:14:55.311 "data_size": 63488 00:14:55.311 } 00:14:55.311 ] 00:14:55.311 }' 00:14:55.311 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.311 15:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.570 15:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:55.570 15:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.570 15:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.570 [2024-11-25 15:41:54.170869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:55.570 [2024-11-25 15:41:54.171153] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:55.570 [2024-11-25 15:41:54.171218] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:55.570 [2024-11-25 15:41:54.171274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:55.570 [2024-11-25 15:41:54.186038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:55.570 15:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.570 15:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:55.570 [2024-11-25 15:41:54.187928] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.951 "name": "raid_bdev1", 00:14:56.951 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:56.951 "strip_size_kb": 0, 00:14:56.951 "state": "online", 
00:14:56.951 "raid_level": "raid1", 00:14:56.951 "superblock": true, 00:14:56.951 "num_base_bdevs": 4, 00:14:56.951 "num_base_bdevs_discovered": 3, 00:14:56.951 "num_base_bdevs_operational": 3, 00:14:56.951 "process": { 00:14:56.951 "type": "rebuild", 00:14:56.951 "target": "spare", 00:14:56.951 "progress": { 00:14:56.951 "blocks": 20480, 00:14:56.951 "percent": 32 00:14:56.951 } 00:14:56.951 }, 00:14:56.951 "base_bdevs_list": [ 00:14:56.951 { 00:14:56.951 "name": "spare", 00:14:56.951 "uuid": "3a8b7c39-51a4-55f1-9b09-ecb5c19f9371", 00:14:56.951 "is_configured": true, 00:14:56.951 "data_offset": 2048, 00:14:56.951 "data_size": 63488 00:14:56.951 }, 00:14:56.951 { 00:14:56.951 "name": null, 00:14:56.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.951 "is_configured": false, 00:14:56.951 "data_offset": 2048, 00:14:56.951 "data_size": 63488 00:14:56.951 }, 00:14:56.951 { 00:14:56.951 "name": "BaseBdev3", 00:14:56.951 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:56.951 "is_configured": true, 00:14:56.951 "data_offset": 2048, 00:14:56.951 "data_size": 63488 00:14:56.951 }, 00:14:56.951 { 00:14:56.951 "name": "BaseBdev4", 00:14:56.951 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:56.951 "is_configured": true, 00:14:56.951 "data_offset": 2048, 00:14:56.951 "data_size": 63488 00:14:56.951 } 00:14:56.951 ] 00:14:56.951 }' 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:56.951 15:41:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.951 [2024-11-25 15:41:55.343249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:56.951 [2024-11-25 15:41:55.392599] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:56.951 [2024-11-25 15:41:55.392718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.951 [2024-11-25 15:41:55.392757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:56.951 [2024-11-25 15:41:55.392777] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.951 15:41:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.951 "name": "raid_bdev1", 00:14:56.951 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:56.951 "strip_size_kb": 0, 00:14:56.951 "state": "online", 00:14:56.951 "raid_level": "raid1", 00:14:56.951 "superblock": true, 00:14:56.951 "num_base_bdevs": 4, 00:14:56.951 "num_base_bdevs_discovered": 2, 00:14:56.951 "num_base_bdevs_operational": 2, 00:14:56.951 "base_bdevs_list": [ 00:14:56.951 { 00:14:56.951 "name": null, 00:14:56.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.951 "is_configured": false, 00:14:56.951 "data_offset": 0, 00:14:56.951 "data_size": 63488 00:14:56.951 }, 00:14:56.951 { 00:14:56.951 "name": null, 00:14:56.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.951 "is_configured": false, 00:14:56.951 "data_offset": 2048, 00:14:56.951 "data_size": 63488 00:14:56.951 }, 00:14:56.951 { 00:14:56.951 "name": "BaseBdev3", 00:14:56.951 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:56.951 "is_configured": true, 00:14:56.951 "data_offset": 2048, 00:14:56.951 "data_size": 63488 00:14:56.951 }, 00:14:56.951 { 00:14:56.951 "name": "BaseBdev4", 00:14:56.951 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:56.951 "is_configured": true, 00:14:56.951 "data_offset": 2048, 00:14:56.951 
"data_size": 63488 00:14:56.951 } 00:14:56.951 ] 00:14:56.951 }' 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.951 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.211 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:57.211 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.211 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.470 [2024-11-25 15:41:55.895623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:57.470 [2024-11-25 15:41:55.895688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.470 [2024-11-25 15:41:55.895717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:57.470 [2024-11-25 15:41:55.895727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.471 [2024-11-25 15:41:55.896223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.471 [2024-11-25 15:41:55.896242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:57.471 [2024-11-25 15:41:55.896346] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:57.471 [2024-11-25 15:41:55.896359] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:57.471 [2024-11-25 15:41:55.896373] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:57.471 [2024-11-25 15:41:55.896391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:57.471 [2024-11-25 15:41:55.910478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:57.471 spare 00:14:57.471 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.471 [2024-11-25 15:41:55.912247] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:57.471 15:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:58.408 15:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.409 15:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.409 15:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.409 15:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.409 15:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.409 15:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.409 15:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.409 15:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.409 15:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.409 15:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.409 15:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.409 "name": "raid_bdev1", 00:14:58.409 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:58.409 "strip_size_kb": 0, 00:14:58.409 
"state": "online", 00:14:58.409 "raid_level": "raid1", 00:14:58.409 "superblock": true, 00:14:58.409 "num_base_bdevs": 4, 00:14:58.409 "num_base_bdevs_discovered": 3, 00:14:58.409 "num_base_bdevs_operational": 3, 00:14:58.409 "process": { 00:14:58.409 "type": "rebuild", 00:14:58.409 "target": "spare", 00:14:58.409 "progress": { 00:14:58.409 "blocks": 20480, 00:14:58.409 "percent": 32 00:14:58.409 } 00:14:58.409 }, 00:14:58.409 "base_bdevs_list": [ 00:14:58.409 { 00:14:58.409 "name": "spare", 00:14:58.409 "uuid": "3a8b7c39-51a4-55f1-9b09-ecb5c19f9371", 00:14:58.409 "is_configured": true, 00:14:58.409 "data_offset": 2048, 00:14:58.409 "data_size": 63488 00:14:58.409 }, 00:14:58.409 { 00:14:58.409 "name": null, 00:14:58.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.409 "is_configured": false, 00:14:58.409 "data_offset": 2048, 00:14:58.409 "data_size": 63488 00:14:58.409 }, 00:14:58.409 { 00:14:58.409 "name": "BaseBdev3", 00:14:58.409 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:58.409 "is_configured": true, 00:14:58.409 "data_offset": 2048, 00:14:58.409 "data_size": 63488 00:14:58.409 }, 00:14:58.409 { 00:14:58.409 "name": "BaseBdev4", 00:14:58.409 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:58.409 "is_configured": true, 00:14:58.409 "data_offset": 2048, 00:14:58.409 "data_size": 63488 00:14:58.409 } 00:14:58.409 ] 00:14:58.409 }' 00:14:58.409 15:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.409 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.409 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.409 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.409 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:58.409 15:41:57 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.409 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.409 [2024-11-25 15:41:57.072150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.669 [2024-11-25 15:41:57.116944] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:58.669 [2024-11-25 15:41:57.117098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.669 [2024-11-25 15:41:57.117118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.669 [2024-11-25 15:41:57.117127] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.669 15:41:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.669 "name": "raid_bdev1", 00:14:58.669 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:58.669 "strip_size_kb": 0, 00:14:58.669 "state": "online", 00:14:58.669 "raid_level": "raid1", 00:14:58.669 "superblock": true, 00:14:58.669 "num_base_bdevs": 4, 00:14:58.669 "num_base_bdevs_discovered": 2, 00:14:58.669 "num_base_bdevs_operational": 2, 00:14:58.669 "base_bdevs_list": [ 00:14:58.669 { 00:14:58.669 "name": null, 00:14:58.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.669 "is_configured": false, 00:14:58.669 "data_offset": 0, 00:14:58.669 "data_size": 63488 00:14:58.669 }, 00:14:58.669 { 00:14:58.669 "name": null, 00:14:58.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.669 "is_configured": false, 00:14:58.669 "data_offset": 2048, 00:14:58.669 "data_size": 63488 00:14:58.669 }, 00:14:58.669 { 00:14:58.669 "name": "BaseBdev3", 00:14:58.669 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:58.669 "is_configured": true, 00:14:58.669 "data_offset": 2048, 00:14:58.669 "data_size": 63488 00:14:58.669 }, 00:14:58.669 { 00:14:58.669 "name": "BaseBdev4", 00:14:58.669 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:58.669 "is_configured": true, 00:14:58.669 "data_offset": 2048, 00:14:58.669 
"data_size": 63488 00:14:58.669 } 00:14:58.669 ] 00:14:58.669 }' 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.669 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.930 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:58.930 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.930 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:58.930 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:58.930 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.930 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.930 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.930 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.930 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.930 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.930 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.930 "name": "raid_bdev1", 00:14:58.930 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:14:58.930 "strip_size_kb": 0, 00:14:58.930 "state": "online", 00:14:58.930 "raid_level": "raid1", 00:14:58.930 "superblock": true, 00:14:58.930 "num_base_bdevs": 4, 00:14:58.930 "num_base_bdevs_discovered": 2, 00:14:58.930 "num_base_bdevs_operational": 2, 00:14:58.930 "base_bdevs_list": [ 00:14:58.930 { 00:14:58.930 "name": null, 00:14:58.930 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:58.930 "is_configured": false, 00:14:58.930 "data_offset": 0, 00:14:58.930 "data_size": 63488 00:14:58.930 }, 00:14:58.930 { 00:14:58.930 "name": null, 00:14:58.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.930 "is_configured": false, 00:14:58.930 "data_offset": 2048, 00:14:58.930 "data_size": 63488 00:14:58.930 }, 00:14:58.930 { 00:14:58.930 "name": "BaseBdev3", 00:14:58.930 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:14:58.930 "is_configured": true, 00:14:58.930 "data_offset": 2048, 00:14:58.930 "data_size": 63488 00:14:58.930 }, 00:14:58.930 { 00:14:58.930 "name": "BaseBdev4", 00:14:58.930 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:14:58.930 "is_configured": true, 00:14:58.930 "data_offset": 2048, 00:14:58.930 "data_size": 63488 00:14:58.930 } 00:14:58.930 ] 00:14:58.930 }' 00:14:58.930 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.191 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:59.191 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.191 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:59.191 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:59.191 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.191 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.191 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.191 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:59.191 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.191 15:41:57 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.191 [2024-11-25 15:41:57.724192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:59.191 [2024-11-25 15:41:57.724291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.191 [2024-11-25 15:41:57.724316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:59.191 [2024-11-25 15:41:57.724327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.191 [2024-11-25 15:41:57.724767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.191 [2024-11-25 15:41:57.724787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:59.191 [2024-11-25 15:41:57.724867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:59.191 [2024-11-25 15:41:57.724885] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:59.191 [2024-11-25 15:41:57.724893] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:59.191 [2024-11-25 15:41:57.724905] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:59.191 BaseBdev1 00:14:59.191 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.191 15:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.129 "name": "raid_bdev1", 00:15:00.129 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:15:00.129 "strip_size_kb": 0, 00:15:00.129 "state": "online", 00:15:00.129 "raid_level": "raid1", 00:15:00.129 "superblock": true, 00:15:00.129 "num_base_bdevs": 4, 00:15:00.129 "num_base_bdevs_discovered": 2, 00:15:00.129 "num_base_bdevs_operational": 2, 00:15:00.129 "base_bdevs_list": [ 00:15:00.129 { 00:15:00.129 "name": null, 00:15:00.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.129 "is_configured": false, 00:15:00.129 
"data_offset": 0, 00:15:00.129 "data_size": 63488 00:15:00.129 }, 00:15:00.129 { 00:15:00.129 "name": null, 00:15:00.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.129 "is_configured": false, 00:15:00.129 "data_offset": 2048, 00:15:00.129 "data_size": 63488 00:15:00.129 }, 00:15:00.129 { 00:15:00.129 "name": "BaseBdev3", 00:15:00.129 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:15:00.129 "is_configured": true, 00:15:00.129 "data_offset": 2048, 00:15:00.129 "data_size": 63488 00:15:00.129 }, 00:15:00.129 { 00:15:00.129 "name": "BaseBdev4", 00:15:00.129 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:15:00.129 "is_configured": true, 00:15:00.129 "data_offset": 2048, 00:15:00.129 "data_size": 63488 00:15:00.129 } 00:15:00.129 ] 00:15:00.129 }' 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.129 15:41:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.695 "name": "raid_bdev1", 00:15:00.695 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:15:00.695 "strip_size_kb": 0, 00:15:00.695 "state": "online", 00:15:00.695 "raid_level": "raid1", 00:15:00.695 "superblock": true, 00:15:00.695 "num_base_bdevs": 4, 00:15:00.695 "num_base_bdevs_discovered": 2, 00:15:00.695 "num_base_bdevs_operational": 2, 00:15:00.695 "base_bdevs_list": [ 00:15:00.695 { 00:15:00.695 "name": null, 00:15:00.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.695 "is_configured": false, 00:15:00.695 "data_offset": 0, 00:15:00.695 "data_size": 63488 00:15:00.695 }, 00:15:00.695 { 00:15:00.695 "name": null, 00:15:00.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.695 "is_configured": false, 00:15:00.695 "data_offset": 2048, 00:15:00.695 "data_size": 63488 00:15:00.695 }, 00:15:00.695 { 00:15:00.695 "name": "BaseBdev3", 00:15:00.695 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:15:00.695 "is_configured": true, 00:15:00.695 "data_offset": 2048, 00:15:00.695 "data_size": 63488 00:15:00.695 }, 00:15:00.695 { 00:15:00.695 "name": "BaseBdev4", 00:15:00.695 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:15:00.695 "is_configured": true, 00:15:00.695 "data_offset": 2048, 00:15:00.695 "data_size": 63488 00:15:00.695 } 00:15:00.695 ] 00:15:00.695 }' 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:00.695 
15:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:00.695 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:00.696 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.696 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:00.696 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.696 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:00.696 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.696 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.696 [2024-11-25 15:41:59.345712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.696 [2024-11-25 15:41:59.345881] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:00.696 [2024-11-25 15:41:59.345893] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:00.696 request: 00:15:00.696 { 00:15:00.696 "base_bdev": "BaseBdev1", 00:15:00.696 "raid_bdev": "raid_bdev1", 00:15:00.696 "method": "bdev_raid_add_base_bdev", 00:15:00.696 "req_id": 1 00:15:00.696 } 00:15:00.696 Got JSON-RPC error response 00:15:00.696 response: 00:15:00.696 { 00:15:00.696 "code": -22, 00:15:00.696 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:00.696 } 00:15:00.696 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:00.696 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:00.696 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:00.696 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:00.696 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:00.696 15:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.077 15:42:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.077 "name": "raid_bdev1", 00:15:02.077 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:15:02.077 "strip_size_kb": 0, 00:15:02.077 "state": "online", 00:15:02.077 "raid_level": "raid1", 00:15:02.077 "superblock": true, 00:15:02.077 "num_base_bdevs": 4, 00:15:02.077 "num_base_bdevs_discovered": 2, 00:15:02.077 "num_base_bdevs_operational": 2, 00:15:02.077 "base_bdevs_list": [ 00:15:02.077 { 00:15:02.077 "name": null, 00:15:02.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.077 "is_configured": false, 00:15:02.077 "data_offset": 0, 00:15:02.077 "data_size": 63488 00:15:02.077 }, 00:15:02.077 { 00:15:02.077 "name": null, 00:15:02.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.077 "is_configured": false, 00:15:02.077 "data_offset": 2048, 00:15:02.077 "data_size": 63488 00:15:02.077 }, 00:15:02.077 { 00:15:02.077 "name": "BaseBdev3", 00:15:02.077 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:15:02.077 "is_configured": true, 00:15:02.077 "data_offset": 2048, 00:15:02.077 "data_size": 63488 00:15:02.077 }, 00:15:02.077 { 00:15:02.077 "name": "BaseBdev4", 00:15:02.077 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:15:02.077 "is_configured": true, 00:15:02.077 "data_offset": 2048, 00:15:02.077 "data_size": 63488 00:15:02.077 } 00:15:02.077 ] 00:15:02.077 }' 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.077 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.338 "name": "raid_bdev1", 00:15:02.338 "uuid": "e4c0842f-2c6f-42a4-b7c2-56c4cefd57d6", 00:15:02.338 "strip_size_kb": 0, 00:15:02.338 "state": "online", 00:15:02.338 "raid_level": "raid1", 00:15:02.338 "superblock": true, 00:15:02.338 "num_base_bdevs": 4, 00:15:02.338 "num_base_bdevs_discovered": 2, 00:15:02.338 "num_base_bdevs_operational": 2, 00:15:02.338 "base_bdevs_list": [ 00:15:02.338 { 00:15:02.338 "name": null, 00:15:02.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.338 "is_configured": false, 00:15:02.338 "data_offset": 0, 00:15:02.338 "data_size": 63488 00:15:02.338 }, 00:15:02.338 { 00:15:02.338 "name": null, 00:15:02.338 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:02.338 "is_configured": false, 00:15:02.338 "data_offset": 2048, 00:15:02.338 "data_size": 63488 00:15:02.338 }, 00:15:02.338 { 00:15:02.338 "name": "BaseBdev3", 00:15:02.338 "uuid": "a4780a32-a32a-5ff4-8062-389b86d4d450", 00:15:02.338 "is_configured": true, 00:15:02.338 "data_offset": 2048, 00:15:02.338 "data_size": 63488 00:15:02.338 }, 00:15:02.338 { 00:15:02.338 "name": "BaseBdev4", 00:15:02.338 "uuid": "2d81ba44-1406-50b5-b408-6225ed1c9015", 00:15:02.338 "is_configured": true, 00:15:02.338 "data_offset": 2048, 00:15:02.338 "data_size": 63488 00:15:02.338 } 00:15:02.338 ] 00:15:02.338 }' 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78807 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 78807 ']' 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78807 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78807 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:02.338 killing process with pid 78807 00:15:02.338 Received shutdown signal, test time was about 17.942968 seconds 00:15:02.338 00:15:02.338 
Latency(us) 00:15:02.338 [2024-11-25T15:42:01.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.338 [2024-11-25T15:42:01.019Z] =================================================================================================================== 00:15:02.338 [2024-11-25T15:42:01.019Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78807' 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78807 00:15:02.338 [2024-11-25 15:42:00.944975] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.338 [2024-11-25 15:42:00.945102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.338 [2024-11-25 15:42:00.945170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.338 [2024-11-25 15:42:00.945179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:02.338 15:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78807 00:15:02.909 [2024-11-25 15:42:01.341864] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:03.849 15:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:03.849 00:15:03.849 real 0m21.234s 00:15:03.849 user 0m27.758s 00:15:03.849 sys 0m2.494s 00:15:03.849 ************************************ 00:15:03.849 END TEST raid_rebuild_test_sb_io 00:15:03.849 ************************************ 00:15:03.849 15:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.849 15:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:03.849 15:42:02 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:03.849 15:42:02 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:03.849 15:42:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:03.849 15:42:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.849 15:42:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:03.849 ************************************ 00:15:03.849 START TEST raid5f_state_function_test 00:15:03.849 ************************************ 00:15:03.849 15:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:15:03.849 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:03.849 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:03.849 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:03.849 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:03.849 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:03.849 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:03.849 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:03.849 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:03.849 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:03.849 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:03.849 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:03.849 15:42:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79529 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:03.850 Process raid pid: 79529 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79529' 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79529 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79529 ']' 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.850 15:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.110 [2024-11-25 15:42:02.597683] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:15:04.110 [2024-11-25 15:42:02.597884] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.110 [2024-11-25 15:42:02.769413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.370 [2024-11-25 15:42:02.875456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.629 [2024-11-25 15:42:03.056473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.629 [2024-11-25 15:42:03.056595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.890 [2024-11-25 15:42:03.416406] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:04.890 [2024-11-25 15:42:03.416455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:04.890 [2024-11-25 15:42:03.416465] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.890 [2024-11-25 15:42:03.416475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.890 [2024-11-25 15:42:03.416481] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:04.890 [2024-11-25 15:42:03.416490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.890 "name": "Existed_Raid", 00:15:04.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.890 "strip_size_kb": 64, 00:15:04.890 "state": "configuring", 00:15:04.890 "raid_level": "raid5f", 00:15:04.890 "superblock": false, 00:15:04.890 "num_base_bdevs": 3, 00:15:04.890 "num_base_bdevs_discovered": 0, 00:15:04.890 "num_base_bdevs_operational": 3, 00:15:04.890 "base_bdevs_list": [ 00:15:04.890 { 00:15:04.890 "name": "BaseBdev1", 00:15:04.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.890 "is_configured": false, 00:15:04.890 "data_offset": 0, 00:15:04.890 "data_size": 0 00:15:04.890 }, 00:15:04.890 { 00:15:04.890 "name": "BaseBdev2", 00:15:04.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.890 "is_configured": false, 00:15:04.890 "data_offset": 0, 00:15:04.890 "data_size": 0 00:15:04.890 }, 00:15:04.890 { 00:15:04.890 "name": "BaseBdev3", 00:15:04.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.890 "is_configured": false, 00:15:04.890 "data_offset": 0, 00:15:04.890 "data_size": 0 00:15:04.890 } 00:15:04.890 ] 00:15:04.890 }' 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.890 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.462 [2024-11-25 15:42:03.863543] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:05.462 [2024-11-25 15:42:03.863574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.462 [2024-11-25 15:42:03.871545] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:05.462 [2024-11-25 15:42:03.871589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:05.462 [2024-11-25 15:42:03.871598] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:05.462 [2024-11-25 15:42:03.871606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:05.462 [2024-11-25 15:42:03.871613] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:05.462 [2024-11-25 15:42:03.871621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.462 [2024-11-25 15:42:03.916584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.462 BaseBdev1 00:15:05.462 15:42:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.462 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.462 [ 00:15:05.462 { 00:15:05.462 "name": "BaseBdev1", 00:15:05.462 "aliases": [ 00:15:05.462 "58b40c25-fb9b-4f74-9b7a-485f2fc07379" 00:15:05.462 ], 00:15:05.462 "product_name": "Malloc disk", 00:15:05.462 "block_size": 512, 00:15:05.462 "num_blocks": 65536, 00:15:05.462 "uuid": "58b40c25-fb9b-4f74-9b7a-485f2fc07379", 00:15:05.462 "assigned_rate_limits": { 00:15:05.462 "rw_ios_per_sec": 0, 00:15:05.462 
"rw_mbytes_per_sec": 0, 00:15:05.462 "r_mbytes_per_sec": 0, 00:15:05.462 "w_mbytes_per_sec": 0 00:15:05.462 }, 00:15:05.462 "claimed": true, 00:15:05.462 "claim_type": "exclusive_write", 00:15:05.462 "zoned": false, 00:15:05.462 "supported_io_types": { 00:15:05.462 "read": true, 00:15:05.462 "write": true, 00:15:05.462 "unmap": true, 00:15:05.462 "flush": true, 00:15:05.462 "reset": true, 00:15:05.462 "nvme_admin": false, 00:15:05.462 "nvme_io": false, 00:15:05.462 "nvme_io_md": false, 00:15:05.462 "write_zeroes": true, 00:15:05.462 "zcopy": true, 00:15:05.462 "get_zone_info": false, 00:15:05.462 "zone_management": false, 00:15:05.462 "zone_append": false, 00:15:05.462 "compare": false, 00:15:05.462 "compare_and_write": false, 00:15:05.462 "abort": true, 00:15:05.462 "seek_hole": false, 00:15:05.462 "seek_data": false, 00:15:05.462 "copy": true, 00:15:05.462 "nvme_iov_md": false 00:15:05.462 }, 00:15:05.462 "memory_domains": [ 00:15:05.462 { 00:15:05.462 "dma_device_id": "system", 00:15:05.462 "dma_device_type": 1 00:15:05.462 }, 00:15:05.462 { 00:15:05.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.463 "dma_device_type": 2 00:15:05.463 } 00:15:05.463 ], 00:15:05.463 "driver_specific": {} 00:15:05.463 } 00:15:05.463 ] 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.463 15:42:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.463 "name": "Existed_Raid", 00:15:05.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.463 "strip_size_kb": 64, 00:15:05.463 "state": "configuring", 00:15:05.463 "raid_level": "raid5f", 00:15:05.463 "superblock": false, 00:15:05.463 "num_base_bdevs": 3, 00:15:05.463 "num_base_bdevs_discovered": 1, 00:15:05.463 "num_base_bdevs_operational": 3, 00:15:05.463 "base_bdevs_list": [ 00:15:05.463 { 00:15:05.463 "name": "BaseBdev1", 00:15:05.463 "uuid": "58b40c25-fb9b-4f74-9b7a-485f2fc07379", 00:15:05.463 "is_configured": true, 00:15:05.463 "data_offset": 0, 00:15:05.463 "data_size": 65536 00:15:05.463 }, 00:15:05.463 { 00:15:05.463 "name": 
"BaseBdev2", 00:15:05.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.463 "is_configured": false, 00:15:05.463 "data_offset": 0, 00:15:05.463 "data_size": 0 00:15:05.463 }, 00:15:05.463 { 00:15:05.463 "name": "BaseBdev3", 00:15:05.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.463 "is_configured": false, 00:15:05.463 "data_offset": 0, 00:15:05.463 "data_size": 0 00:15:05.463 } 00:15:05.463 ] 00:15:05.463 }' 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.463 15:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.724 [2024-11-25 15:42:04.331897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:05.724 [2024-11-25 15:42:04.331984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.724 [2024-11-25 15:42:04.343935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.724 [2024-11-25 15:42:04.345700] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:05.724 [2024-11-25 15:42:04.345769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:05.724 [2024-11-25 15:42:04.345814] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:05.724 [2024-11-25 15:42:04.345836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.724 "name": "Existed_Raid", 00:15:05.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.724 "strip_size_kb": 64, 00:15:05.724 "state": "configuring", 00:15:05.724 "raid_level": "raid5f", 00:15:05.724 "superblock": false, 00:15:05.724 "num_base_bdevs": 3, 00:15:05.724 "num_base_bdevs_discovered": 1, 00:15:05.724 "num_base_bdevs_operational": 3, 00:15:05.724 "base_bdevs_list": [ 00:15:05.724 { 00:15:05.724 "name": "BaseBdev1", 00:15:05.724 "uuid": "58b40c25-fb9b-4f74-9b7a-485f2fc07379", 00:15:05.724 "is_configured": true, 00:15:05.724 "data_offset": 0, 00:15:05.724 "data_size": 65536 00:15:05.724 }, 00:15:05.724 { 00:15:05.724 "name": "BaseBdev2", 00:15:05.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.724 "is_configured": false, 00:15:05.724 "data_offset": 0, 00:15:05.724 "data_size": 0 00:15:05.724 }, 00:15:05.724 { 00:15:05.724 "name": "BaseBdev3", 00:15:05.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.724 "is_configured": false, 00:15:05.724 "data_offset": 0, 00:15:05.724 "data_size": 0 00:15:05.724 } 00:15:05.724 ] 00:15:05.724 }' 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.724 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.296 [2024-11-25 15:42:04.841507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:06.296 BaseBdev2 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.296 15:42:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:06.296 [ 00:15:06.297 { 00:15:06.297 "name": "BaseBdev2", 00:15:06.297 "aliases": [ 00:15:06.297 "79a088b7-bda4-4764-b8ec-61875a3a7a5d" 00:15:06.297 ], 00:15:06.297 "product_name": "Malloc disk", 00:15:06.297 "block_size": 512, 00:15:06.297 "num_blocks": 65536, 00:15:06.297 "uuid": "79a088b7-bda4-4764-b8ec-61875a3a7a5d", 00:15:06.297 "assigned_rate_limits": { 00:15:06.297 "rw_ios_per_sec": 0, 00:15:06.297 "rw_mbytes_per_sec": 0, 00:15:06.297 "r_mbytes_per_sec": 0, 00:15:06.297 "w_mbytes_per_sec": 0 00:15:06.297 }, 00:15:06.297 "claimed": true, 00:15:06.297 "claim_type": "exclusive_write", 00:15:06.297 "zoned": false, 00:15:06.297 "supported_io_types": { 00:15:06.297 "read": true, 00:15:06.297 "write": true, 00:15:06.297 "unmap": true, 00:15:06.297 "flush": true, 00:15:06.297 "reset": true, 00:15:06.297 "nvme_admin": false, 00:15:06.297 "nvme_io": false, 00:15:06.297 "nvme_io_md": false, 00:15:06.297 "write_zeroes": true, 00:15:06.297 "zcopy": true, 00:15:06.297 "get_zone_info": false, 00:15:06.297 "zone_management": false, 00:15:06.297 "zone_append": false, 00:15:06.297 "compare": false, 00:15:06.297 "compare_and_write": false, 00:15:06.297 "abort": true, 00:15:06.297 "seek_hole": false, 00:15:06.297 "seek_data": false, 00:15:06.297 "copy": true, 00:15:06.297 "nvme_iov_md": false 00:15:06.297 }, 00:15:06.297 "memory_domains": [ 00:15:06.297 { 00:15:06.297 "dma_device_id": "system", 00:15:06.297 "dma_device_type": 1 00:15:06.297 }, 00:15:06.297 { 00:15:06.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.297 "dma_device_type": 2 00:15:06.297 } 00:15:06.297 ], 00:15:06.297 "driver_specific": {} 00:15:06.297 } 00:15:06.297 ] 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:06.297 "name": "Existed_Raid", 00:15:06.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.297 "strip_size_kb": 64, 00:15:06.297 "state": "configuring", 00:15:06.297 "raid_level": "raid5f", 00:15:06.297 "superblock": false, 00:15:06.297 "num_base_bdevs": 3, 00:15:06.297 "num_base_bdevs_discovered": 2, 00:15:06.297 "num_base_bdevs_operational": 3, 00:15:06.297 "base_bdevs_list": [ 00:15:06.297 { 00:15:06.297 "name": "BaseBdev1", 00:15:06.297 "uuid": "58b40c25-fb9b-4f74-9b7a-485f2fc07379", 00:15:06.297 "is_configured": true, 00:15:06.297 "data_offset": 0, 00:15:06.297 "data_size": 65536 00:15:06.297 }, 00:15:06.297 { 00:15:06.297 "name": "BaseBdev2", 00:15:06.297 "uuid": "79a088b7-bda4-4764-b8ec-61875a3a7a5d", 00:15:06.297 "is_configured": true, 00:15:06.297 "data_offset": 0, 00:15:06.297 "data_size": 65536 00:15:06.297 }, 00:15:06.297 { 00:15:06.297 "name": "BaseBdev3", 00:15:06.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.297 "is_configured": false, 00:15:06.297 "data_offset": 0, 00:15:06.297 "data_size": 0 00:15:06.297 } 00:15:06.297 ] 00:15:06.297 }' 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.297 15:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.868 [2024-11-25 15:42:05.363802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:06.868 [2024-11-25 15:42:05.363946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:06.868 [2024-11-25 15:42:05.363977] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:06.868 [2024-11-25 15:42:05.364313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:06.868 [2024-11-25 15:42:05.369619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:06.868 [2024-11-25 15:42:05.369673] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:06.868 [2024-11-25 15:42:05.369994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.868 BaseBdev3 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.868 [ 00:15:06.868 { 00:15:06.868 "name": "BaseBdev3", 00:15:06.868 "aliases": [ 00:15:06.868 "873b56a2-6756-4fc3-ab29-fdec960bc1a7" 00:15:06.868 ], 00:15:06.868 "product_name": "Malloc disk", 00:15:06.868 "block_size": 512, 00:15:06.868 "num_blocks": 65536, 00:15:06.868 "uuid": "873b56a2-6756-4fc3-ab29-fdec960bc1a7", 00:15:06.868 "assigned_rate_limits": { 00:15:06.868 "rw_ios_per_sec": 0, 00:15:06.868 "rw_mbytes_per_sec": 0, 00:15:06.868 "r_mbytes_per_sec": 0, 00:15:06.868 "w_mbytes_per_sec": 0 00:15:06.868 }, 00:15:06.868 "claimed": true, 00:15:06.868 "claim_type": "exclusive_write", 00:15:06.868 "zoned": false, 00:15:06.868 "supported_io_types": { 00:15:06.868 "read": true, 00:15:06.868 "write": true, 00:15:06.868 "unmap": true, 00:15:06.868 "flush": true, 00:15:06.868 "reset": true, 00:15:06.868 "nvme_admin": false, 00:15:06.868 "nvme_io": false, 00:15:06.868 "nvme_io_md": false, 00:15:06.868 "write_zeroes": true, 00:15:06.868 "zcopy": true, 00:15:06.868 "get_zone_info": false, 00:15:06.868 "zone_management": false, 00:15:06.868 "zone_append": false, 00:15:06.868 "compare": false, 00:15:06.868 "compare_and_write": false, 00:15:06.868 "abort": true, 00:15:06.868 "seek_hole": false, 00:15:06.868 "seek_data": false, 00:15:06.868 "copy": true, 00:15:06.868 "nvme_iov_md": false 00:15:06.868 }, 00:15:06.868 "memory_domains": [ 00:15:06.868 { 00:15:06.868 "dma_device_id": "system", 00:15:06.868 "dma_device_type": 1 00:15:06.868 }, 00:15:06.868 { 00:15:06.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.868 "dma_device_type": 2 00:15:06.868 } 00:15:06.868 ], 00:15:06.868 "driver_specific": {} 00:15:06.868 } 00:15:06.868 ] 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.868 15:42:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.868 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.868 "name": "Existed_Raid", 00:15:06.868 "uuid": "908ea42e-6527-46fc-8453-6332e92d0d6b", 00:15:06.868 "strip_size_kb": 64, 00:15:06.868 "state": "online", 00:15:06.868 "raid_level": "raid5f", 00:15:06.868 "superblock": false, 00:15:06.868 "num_base_bdevs": 3, 00:15:06.868 "num_base_bdevs_discovered": 3, 00:15:06.868 "num_base_bdevs_operational": 3, 00:15:06.868 "base_bdevs_list": [ 00:15:06.868 { 00:15:06.868 "name": "BaseBdev1", 00:15:06.868 "uuid": "58b40c25-fb9b-4f74-9b7a-485f2fc07379", 00:15:06.868 "is_configured": true, 00:15:06.868 "data_offset": 0, 00:15:06.868 "data_size": 65536 00:15:06.868 }, 00:15:06.868 { 00:15:06.869 "name": "BaseBdev2", 00:15:06.869 "uuid": "79a088b7-bda4-4764-b8ec-61875a3a7a5d", 00:15:06.869 "is_configured": true, 00:15:06.869 "data_offset": 0, 00:15:06.869 "data_size": 65536 00:15:06.869 }, 00:15:06.869 { 00:15:06.869 "name": "BaseBdev3", 00:15:06.869 "uuid": "873b56a2-6756-4fc3-ab29-fdec960bc1a7", 00:15:06.869 "is_configured": true, 00:15:06.869 "data_offset": 0, 00:15:06.869 "data_size": 65536 00:15:06.869 } 00:15:06.869 ] 00:15:06.869 }' 00:15:06.869 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.869 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:07.439 15:42:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.439 [2024-11-25 15:42:05.871512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:07.439 "name": "Existed_Raid", 00:15:07.439 "aliases": [ 00:15:07.439 "908ea42e-6527-46fc-8453-6332e92d0d6b" 00:15:07.439 ], 00:15:07.439 "product_name": "Raid Volume", 00:15:07.439 "block_size": 512, 00:15:07.439 "num_blocks": 131072, 00:15:07.439 "uuid": "908ea42e-6527-46fc-8453-6332e92d0d6b", 00:15:07.439 "assigned_rate_limits": { 00:15:07.439 "rw_ios_per_sec": 0, 00:15:07.439 "rw_mbytes_per_sec": 0, 00:15:07.439 "r_mbytes_per_sec": 0, 00:15:07.439 "w_mbytes_per_sec": 0 00:15:07.439 }, 00:15:07.439 "claimed": false, 00:15:07.439 "zoned": false, 00:15:07.439 "supported_io_types": { 00:15:07.439 "read": true, 00:15:07.439 "write": true, 00:15:07.439 "unmap": false, 00:15:07.439 "flush": false, 00:15:07.439 "reset": true, 00:15:07.439 "nvme_admin": false, 00:15:07.439 "nvme_io": false, 00:15:07.439 "nvme_io_md": false, 00:15:07.439 "write_zeroes": true, 00:15:07.439 "zcopy": false, 00:15:07.439 "get_zone_info": false, 00:15:07.439 "zone_management": false, 00:15:07.439 "zone_append": false, 
00:15:07.439 "compare": false, 00:15:07.439 "compare_and_write": false, 00:15:07.439 "abort": false, 00:15:07.439 "seek_hole": false, 00:15:07.439 "seek_data": false, 00:15:07.439 "copy": false, 00:15:07.439 "nvme_iov_md": false 00:15:07.439 }, 00:15:07.439 "driver_specific": { 00:15:07.439 "raid": { 00:15:07.439 "uuid": "908ea42e-6527-46fc-8453-6332e92d0d6b", 00:15:07.439 "strip_size_kb": 64, 00:15:07.439 "state": "online", 00:15:07.439 "raid_level": "raid5f", 00:15:07.439 "superblock": false, 00:15:07.439 "num_base_bdevs": 3, 00:15:07.439 "num_base_bdevs_discovered": 3, 00:15:07.439 "num_base_bdevs_operational": 3, 00:15:07.439 "base_bdevs_list": [ 00:15:07.439 { 00:15:07.439 "name": "BaseBdev1", 00:15:07.439 "uuid": "58b40c25-fb9b-4f74-9b7a-485f2fc07379", 00:15:07.439 "is_configured": true, 00:15:07.439 "data_offset": 0, 00:15:07.439 "data_size": 65536 00:15:07.439 }, 00:15:07.439 { 00:15:07.439 "name": "BaseBdev2", 00:15:07.439 "uuid": "79a088b7-bda4-4764-b8ec-61875a3a7a5d", 00:15:07.439 "is_configured": true, 00:15:07.439 "data_offset": 0, 00:15:07.439 "data_size": 65536 00:15:07.439 }, 00:15:07.439 { 00:15:07.439 "name": "BaseBdev3", 00:15:07.439 "uuid": "873b56a2-6756-4fc3-ab29-fdec960bc1a7", 00:15:07.439 "is_configured": true, 00:15:07.439 "data_offset": 0, 00:15:07.439 "data_size": 65536 00:15:07.439 } 00:15:07.439 ] 00:15:07.439 } 00:15:07.439 } 00:15:07.439 }' 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:07.439 BaseBdev2 00:15:07.439 BaseBdev3' 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.439 15:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.440 15:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.440 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.700 [2024-11-25 15:42:06.126940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:07.700 
15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.700 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.700 "name": "Existed_Raid", 00:15:07.701 "uuid": "908ea42e-6527-46fc-8453-6332e92d0d6b", 00:15:07.701 "strip_size_kb": 64, 00:15:07.701 "state": 
"online", 00:15:07.701 "raid_level": "raid5f", 00:15:07.701 "superblock": false, 00:15:07.701 "num_base_bdevs": 3, 00:15:07.701 "num_base_bdevs_discovered": 2, 00:15:07.701 "num_base_bdevs_operational": 2, 00:15:07.701 "base_bdevs_list": [ 00:15:07.701 { 00:15:07.701 "name": null, 00:15:07.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.701 "is_configured": false, 00:15:07.701 "data_offset": 0, 00:15:07.701 "data_size": 65536 00:15:07.701 }, 00:15:07.701 { 00:15:07.701 "name": "BaseBdev2", 00:15:07.701 "uuid": "79a088b7-bda4-4764-b8ec-61875a3a7a5d", 00:15:07.701 "is_configured": true, 00:15:07.701 "data_offset": 0, 00:15:07.701 "data_size": 65536 00:15:07.701 }, 00:15:07.701 { 00:15:07.701 "name": "BaseBdev3", 00:15:07.701 "uuid": "873b56a2-6756-4fc3-ab29-fdec960bc1a7", 00:15:07.701 "is_configured": true, 00:15:07.701 "data_offset": 0, 00:15:07.701 "data_size": 65536 00:15:07.701 } 00:15:07.701 ] 00:15:07.701 }' 00:15:07.701 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.701 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.269 [2024-11-25 15:42:06.706378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:08.269 [2024-11-25 15:42:06.706479] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.269 [2024-11-25 15:42:06.795886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.269 [2024-11-25 15:42:06.855818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:08.269 [2024-11-25 15:42:06.855912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:08.269 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:08.530 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:08.530 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.530 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.530 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.530 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.530 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:08.530 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:08.530 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:08.530 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:08.530 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:08.530 15:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:08.530 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.530 15:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.530 BaseBdev2 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:08.530 [ 00:15:08.530 { 00:15:08.530 "name": "BaseBdev2", 00:15:08.530 "aliases": [ 00:15:08.530 "997369f3-a978-44f6-9916-0566016e3545" 00:15:08.530 ], 00:15:08.530 "product_name": "Malloc disk", 00:15:08.530 "block_size": 512, 00:15:08.530 "num_blocks": 65536, 00:15:08.530 "uuid": "997369f3-a978-44f6-9916-0566016e3545", 00:15:08.530 "assigned_rate_limits": { 00:15:08.530 "rw_ios_per_sec": 0, 00:15:08.530 "rw_mbytes_per_sec": 0, 00:15:08.530 "r_mbytes_per_sec": 0, 00:15:08.530 "w_mbytes_per_sec": 0 00:15:08.530 }, 00:15:08.530 "claimed": false, 00:15:08.530 "zoned": false, 00:15:08.530 "supported_io_types": { 00:15:08.530 "read": true, 00:15:08.530 "write": true, 00:15:08.530 "unmap": true, 00:15:08.530 "flush": true, 00:15:08.530 "reset": true, 00:15:08.530 "nvme_admin": false, 00:15:08.530 "nvme_io": false, 00:15:08.530 "nvme_io_md": false, 00:15:08.530 "write_zeroes": true, 00:15:08.530 "zcopy": true, 00:15:08.530 "get_zone_info": false, 00:15:08.530 "zone_management": false, 00:15:08.530 "zone_append": false, 00:15:08.530 "compare": false, 00:15:08.530 "compare_and_write": false, 00:15:08.530 "abort": true, 00:15:08.530 "seek_hole": false, 00:15:08.530 "seek_data": false, 00:15:08.530 "copy": true, 00:15:08.530 "nvme_iov_md": false 00:15:08.530 }, 00:15:08.530 "memory_domains": [ 00:15:08.530 { 00:15:08.530 "dma_device_id": "system", 00:15:08.530 "dma_device_type": 1 00:15:08.530 }, 00:15:08.530 { 00:15:08.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.530 "dma_device_type": 2 00:15:08.530 } 00:15:08.530 ], 00:15:08.530 "driver_specific": {} 00:15:08.530 } 00:15:08.530 ] 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.530 BaseBdev3 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:08.530 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:08.531 [ 00:15:08.531 { 00:15:08.531 "name": "BaseBdev3", 00:15:08.531 "aliases": [ 00:15:08.531 "010dd995-5828-432c-9f2c-a9869ebf3ba1" 00:15:08.531 ], 00:15:08.531 "product_name": "Malloc disk", 00:15:08.531 "block_size": 512, 00:15:08.531 "num_blocks": 65536, 00:15:08.531 "uuid": "010dd995-5828-432c-9f2c-a9869ebf3ba1", 00:15:08.531 "assigned_rate_limits": { 00:15:08.531 "rw_ios_per_sec": 0, 00:15:08.531 "rw_mbytes_per_sec": 0, 00:15:08.531 "r_mbytes_per_sec": 0, 00:15:08.531 "w_mbytes_per_sec": 0 00:15:08.531 }, 00:15:08.531 "claimed": false, 00:15:08.531 "zoned": false, 00:15:08.531 "supported_io_types": { 00:15:08.531 "read": true, 00:15:08.531 "write": true, 00:15:08.531 "unmap": true, 00:15:08.531 "flush": true, 00:15:08.531 "reset": true, 00:15:08.531 "nvme_admin": false, 00:15:08.531 "nvme_io": false, 00:15:08.531 "nvme_io_md": false, 00:15:08.531 "write_zeroes": true, 00:15:08.531 "zcopy": true, 00:15:08.531 "get_zone_info": false, 00:15:08.531 "zone_management": false, 00:15:08.531 "zone_append": false, 00:15:08.531 "compare": false, 00:15:08.531 "compare_and_write": false, 00:15:08.531 "abort": true, 00:15:08.531 "seek_hole": false, 00:15:08.531 "seek_data": false, 00:15:08.531 "copy": true, 00:15:08.531 "nvme_iov_md": false 00:15:08.531 }, 00:15:08.531 "memory_domains": [ 00:15:08.531 { 00:15:08.531 "dma_device_id": "system", 00:15:08.531 "dma_device_type": 1 00:15:08.531 }, 00:15:08.531 { 00:15:08.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.531 "dma_device_type": 2 00:15:08.531 } 00:15:08.531 ], 00:15:08.531 "driver_specific": {} 00:15:08.531 } 00:15:08.531 ] 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:08.531 15:42:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.531 [2024-11-25 15:42:07.157926] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:08.531 [2024-11-25 15:42:07.158027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:08.531 [2024-11-25 15:42:07.158087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.531 [2024-11-25 15:42:07.159855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.531 15:42:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.531 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.791 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.791 "name": "Existed_Raid", 00:15:08.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.791 "strip_size_kb": 64, 00:15:08.791 "state": "configuring", 00:15:08.792 "raid_level": "raid5f", 00:15:08.792 "superblock": false, 00:15:08.792 "num_base_bdevs": 3, 00:15:08.792 "num_base_bdevs_discovered": 2, 00:15:08.792 "num_base_bdevs_operational": 3, 00:15:08.792 "base_bdevs_list": [ 00:15:08.792 { 00:15:08.792 "name": "BaseBdev1", 00:15:08.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.792 "is_configured": false, 00:15:08.792 "data_offset": 0, 00:15:08.792 "data_size": 0 00:15:08.792 }, 00:15:08.792 { 00:15:08.792 "name": "BaseBdev2", 00:15:08.792 "uuid": "997369f3-a978-44f6-9916-0566016e3545", 00:15:08.792 "is_configured": true, 00:15:08.792 "data_offset": 0, 00:15:08.792 "data_size": 65536 00:15:08.792 }, 00:15:08.792 { 00:15:08.792 "name": "BaseBdev3", 00:15:08.792 "uuid": "010dd995-5828-432c-9f2c-a9869ebf3ba1", 00:15:08.792 "is_configured": true, 
00:15:08.792 "data_offset": 0, 00:15:08.792 "data_size": 65536 00:15:08.792 } 00:15:08.792 ] 00:15:08.792 }' 00:15:08.792 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.792 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.052 [2024-11-25 15:42:07.573185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.052 15:42:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.052 "name": "Existed_Raid", 00:15:09.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.052 "strip_size_kb": 64, 00:15:09.052 "state": "configuring", 00:15:09.052 "raid_level": "raid5f", 00:15:09.052 "superblock": false, 00:15:09.052 "num_base_bdevs": 3, 00:15:09.052 "num_base_bdevs_discovered": 1, 00:15:09.052 "num_base_bdevs_operational": 3, 00:15:09.052 "base_bdevs_list": [ 00:15:09.052 { 00:15:09.052 "name": "BaseBdev1", 00:15:09.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.052 "is_configured": false, 00:15:09.052 "data_offset": 0, 00:15:09.052 "data_size": 0 00:15:09.052 }, 00:15:09.052 { 00:15:09.052 "name": null, 00:15:09.052 "uuid": "997369f3-a978-44f6-9916-0566016e3545", 00:15:09.052 "is_configured": false, 00:15:09.052 "data_offset": 0, 00:15:09.052 "data_size": 65536 00:15:09.052 }, 00:15:09.052 { 00:15:09.052 "name": "BaseBdev3", 00:15:09.052 "uuid": "010dd995-5828-432c-9f2c-a9869ebf3ba1", 00:15:09.052 "is_configured": true, 00:15:09.052 "data_offset": 0, 00:15:09.052 "data_size": 65536 00:15:09.052 } 00:15:09.052 ] 00:15:09.052 }' 00:15:09.052 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.052 15:42:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.312 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:09.312 15:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.312 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.312 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.582 15:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.582 [2024-11-25 15:42:08.047157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:09.582 BaseBdev1 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.582 15:42:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.582 [ 00:15:09.582 { 00:15:09.582 "name": "BaseBdev1", 00:15:09.582 "aliases": [ 00:15:09.582 "bfcbddbf-d2fd-475e-8f64-15c3f634f72b" 00:15:09.582 ], 00:15:09.582 "product_name": "Malloc disk", 00:15:09.582 "block_size": 512, 00:15:09.582 "num_blocks": 65536, 00:15:09.582 "uuid": "bfcbddbf-d2fd-475e-8f64-15c3f634f72b", 00:15:09.582 "assigned_rate_limits": { 00:15:09.582 "rw_ios_per_sec": 0, 00:15:09.582 "rw_mbytes_per_sec": 0, 00:15:09.582 "r_mbytes_per_sec": 0, 00:15:09.582 "w_mbytes_per_sec": 0 00:15:09.582 }, 00:15:09.582 "claimed": true, 00:15:09.582 "claim_type": "exclusive_write", 00:15:09.582 "zoned": false, 00:15:09.582 "supported_io_types": { 00:15:09.582 "read": true, 00:15:09.582 "write": true, 00:15:09.582 "unmap": true, 00:15:09.582 "flush": true, 00:15:09.582 "reset": true, 00:15:09.582 "nvme_admin": false, 00:15:09.582 "nvme_io": false, 00:15:09.582 "nvme_io_md": false, 00:15:09.582 "write_zeroes": true, 00:15:09.582 "zcopy": true, 00:15:09.582 "get_zone_info": false, 00:15:09.582 "zone_management": false, 00:15:09.582 "zone_append": false, 00:15:09.582 
"compare": false, 00:15:09.582 "compare_and_write": false, 00:15:09.582 "abort": true, 00:15:09.582 "seek_hole": false, 00:15:09.582 "seek_data": false, 00:15:09.582 "copy": true, 00:15:09.582 "nvme_iov_md": false 00:15:09.582 }, 00:15:09.582 "memory_domains": [ 00:15:09.582 { 00:15:09.582 "dma_device_id": "system", 00:15:09.582 "dma_device_type": 1 00:15:09.582 }, 00:15:09.582 { 00:15:09.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.582 "dma_device_type": 2 00:15:09.582 } 00:15:09.582 ], 00:15:09.582 "driver_specific": {} 00:15:09.582 } 00:15:09.582 ] 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.582 15:42:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.582 "name": "Existed_Raid", 00:15:09.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.582 "strip_size_kb": 64, 00:15:09.582 "state": "configuring", 00:15:09.582 "raid_level": "raid5f", 00:15:09.582 "superblock": false, 00:15:09.582 "num_base_bdevs": 3, 00:15:09.582 "num_base_bdevs_discovered": 2, 00:15:09.582 "num_base_bdevs_operational": 3, 00:15:09.582 "base_bdevs_list": [ 00:15:09.582 { 00:15:09.582 "name": "BaseBdev1", 00:15:09.582 "uuid": "bfcbddbf-d2fd-475e-8f64-15c3f634f72b", 00:15:09.582 "is_configured": true, 00:15:09.582 "data_offset": 0, 00:15:09.582 "data_size": 65536 00:15:09.582 }, 00:15:09.582 { 00:15:09.582 "name": null, 00:15:09.582 "uuid": "997369f3-a978-44f6-9916-0566016e3545", 00:15:09.582 "is_configured": false, 00:15:09.582 "data_offset": 0, 00:15:09.582 "data_size": 65536 00:15:09.582 }, 00:15:09.582 { 00:15:09.582 "name": "BaseBdev3", 00:15:09.582 "uuid": "010dd995-5828-432c-9f2c-a9869ebf3ba1", 00:15:09.582 "is_configured": true, 00:15:09.582 "data_offset": 0, 00:15:09.582 "data_size": 65536 00:15:09.582 } 00:15:09.582 ] 00:15:09.582 }' 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.582 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.859 15:42:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:09.859 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.859 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.859 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.121 [2024-11-25 15:42:08.578257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.121 15:42:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.121 "name": "Existed_Raid", 00:15:10.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.121 "strip_size_kb": 64, 00:15:10.121 "state": "configuring", 00:15:10.121 "raid_level": "raid5f", 00:15:10.121 "superblock": false, 00:15:10.121 "num_base_bdevs": 3, 00:15:10.121 "num_base_bdevs_discovered": 1, 00:15:10.121 "num_base_bdevs_operational": 3, 00:15:10.121 "base_bdevs_list": [ 00:15:10.121 { 00:15:10.121 "name": "BaseBdev1", 00:15:10.121 "uuid": "bfcbddbf-d2fd-475e-8f64-15c3f634f72b", 00:15:10.121 "is_configured": true, 00:15:10.121 "data_offset": 0, 00:15:10.121 "data_size": 65536 00:15:10.121 }, 00:15:10.121 { 00:15:10.121 "name": null, 00:15:10.121 "uuid": "997369f3-a978-44f6-9916-0566016e3545", 00:15:10.121 "is_configured": false, 00:15:10.121 "data_offset": 0, 00:15:10.121 "data_size": 65536 00:15:10.121 }, 00:15:10.121 { 00:15:10.121 "name": null, 
00:15:10.121 "uuid": "010dd995-5828-432c-9f2c-a9869ebf3ba1", 00:15:10.121 "is_configured": false, 00:15:10.121 "data_offset": 0, 00:15:10.121 "data_size": 65536 00:15:10.121 } 00:15:10.121 ] 00:15:10.121 }' 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.121 15:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.381 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:10.381 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.381 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.381 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.641 [2024-11-25 15:42:09.089426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.641 15:42:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.641 "name": "Existed_Raid", 00:15:10.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.641 "strip_size_kb": 64, 00:15:10.641 "state": "configuring", 00:15:10.641 "raid_level": "raid5f", 00:15:10.641 "superblock": false, 00:15:10.641 "num_base_bdevs": 3, 00:15:10.641 "num_base_bdevs_discovered": 2, 00:15:10.641 "num_base_bdevs_operational": 3, 00:15:10.641 "base_bdevs_list": [ 00:15:10.641 { 
00:15:10.641 "name": "BaseBdev1", 00:15:10.641 "uuid": "bfcbddbf-d2fd-475e-8f64-15c3f634f72b", 00:15:10.641 "is_configured": true, 00:15:10.641 "data_offset": 0, 00:15:10.641 "data_size": 65536 00:15:10.641 }, 00:15:10.641 { 00:15:10.641 "name": null, 00:15:10.641 "uuid": "997369f3-a978-44f6-9916-0566016e3545", 00:15:10.641 "is_configured": false, 00:15:10.641 "data_offset": 0, 00:15:10.641 "data_size": 65536 00:15:10.641 }, 00:15:10.641 { 00:15:10.641 "name": "BaseBdev3", 00:15:10.641 "uuid": "010dd995-5828-432c-9f2c-a9869ebf3ba1", 00:15:10.641 "is_configured": true, 00:15:10.641 "data_offset": 0, 00:15:10.641 "data_size": 65536 00:15:10.641 } 00:15:10.641 ] 00:15:10.641 }' 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.641 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.901 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.901 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.901 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.901 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:10.901 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.901 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:10.901 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:10.901 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.901 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.901 [2024-11-25 15:42:09.564610] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.160 "name": "Existed_Raid", 00:15:11.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.160 "strip_size_kb": 64, 00:15:11.160 "state": "configuring", 00:15:11.160 "raid_level": "raid5f", 00:15:11.160 "superblock": false, 00:15:11.160 "num_base_bdevs": 3, 00:15:11.160 "num_base_bdevs_discovered": 1, 00:15:11.160 "num_base_bdevs_operational": 3, 00:15:11.160 "base_bdevs_list": [ 00:15:11.160 { 00:15:11.160 "name": null, 00:15:11.160 "uuid": "bfcbddbf-d2fd-475e-8f64-15c3f634f72b", 00:15:11.160 "is_configured": false, 00:15:11.160 "data_offset": 0, 00:15:11.160 "data_size": 65536 00:15:11.160 }, 00:15:11.160 { 00:15:11.160 "name": null, 00:15:11.160 "uuid": "997369f3-a978-44f6-9916-0566016e3545", 00:15:11.160 "is_configured": false, 00:15:11.160 "data_offset": 0, 00:15:11.160 "data_size": 65536 00:15:11.160 }, 00:15:11.160 { 00:15:11.160 "name": "BaseBdev3", 00:15:11.160 "uuid": "010dd995-5828-432c-9f2c-a9869ebf3ba1", 00:15:11.160 "is_configured": true, 00:15:11.160 "data_offset": 0, 00:15:11.160 "data_size": 65536 00:15:11.160 } 00:15:11.160 ] 00:15:11.160 }' 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.160 15:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.419 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.419 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:11.419 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.419 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.419 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.419 15:42:10 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:11.419 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:11.419 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.419 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.419 [2024-11-25 15:42:10.093506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.679 15:42:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.679 "name": "Existed_Raid", 00:15:11.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.679 "strip_size_kb": 64, 00:15:11.679 "state": "configuring", 00:15:11.679 "raid_level": "raid5f", 00:15:11.679 "superblock": false, 00:15:11.679 "num_base_bdevs": 3, 00:15:11.679 "num_base_bdevs_discovered": 2, 00:15:11.679 "num_base_bdevs_operational": 3, 00:15:11.679 "base_bdevs_list": [ 00:15:11.679 { 00:15:11.679 "name": null, 00:15:11.679 "uuid": "bfcbddbf-d2fd-475e-8f64-15c3f634f72b", 00:15:11.679 "is_configured": false, 00:15:11.679 "data_offset": 0, 00:15:11.679 "data_size": 65536 00:15:11.679 }, 00:15:11.679 { 00:15:11.679 "name": "BaseBdev2", 00:15:11.679 "uuid": "997369f3-a978-44f6-9916-0566016e3545", 00:15:11.679 "is_configured": true, 00:15:11.679 "data_offset": 0, 00:15:11.679 "data_size": 65536 00:15:11.679 }, 00:15:11.679 { 00:15:11.679 "name": "BaseBdev3", 00:15:11.679 "uuid": "010dd995-5828-432c-9f2c-a9869ebf3ba1", 00:15:11.679 "is_configured": true, 00:15:11.679 "data_offset": 0, 00:15:11.679 "data_size": 65536 00:15:11.679 } 00:15:11.679 ] 00:15:11.679 }' 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.679 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.943 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:11.943 15:42:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.943 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.943 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.943 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.943 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:11.943 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:11.943 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.943 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.943 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.943 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.943 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bfcbddbf-d2fd-475e-8f64-15c3f634f72b 00:15:11.943 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.943 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.943 [2024-11-25 15:42:10.585248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:11.943 [2024-11-25 15:42:10.585352] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:11.943 [2024-11-25 15:42:10.585381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:11.943 [2024-11-25 15:42:10.585666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:15:11.943 [2024-11-25 15:42:10.591350] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:11.944 [2024-11-25 15:42:10.591406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:11.944 [2024-11-25 15:42:10.591714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.944 NewBaseBdev 00:15:11.944 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.944 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:11.944 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:11.944 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:11.944 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:11.944 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:11.944 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:11.944 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:11.944 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.944 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.944 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.944 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:11.944 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.944 15:42:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.944 [ 00:15:11.944 { 00:15:11.944 "name": "NewBaseBdev", 00:15:11.944 "aliases": [ 00:15:11.944 "bfcbddbf-d2fd-475e-8f64-15c3f634f72b" 00:15:11.944 ], 00:15:11.944 "product_name": "Malloc disk", 00:15:11.944 "block_size": 512, 00:15:11.944 "num_blocks": 65536, 00:15:11.944 "uuid": "bfcbddbf-d2fd-475e-8f64-15c3f634f72b", 00:15:11.944 "assigned_rate_limits": { 00:15:11.944 "rw_ios_per_sec": 0, 00:15:11.944 "rw_mbytes_per_sec": 0, 00:15:11.944 "r_mbytes_per_sec": 0, 00:15:11.944 "w_mbytes_per_sec": 0 00:15:11.944 }, 00:15:11.944 "claimed": true, 00:15:11.944 "claim_type": "exclusive_write", 00:15:11.944 "zoned": false, 00:15:11.944 "supported_io_types": { 00:15:11.944 "read": true, 00:15:11.944 "write": true, 00:15:12.206 "unmap": true, 00:15:12.206 "flush": true, 00:15:12.206 "reset": true, 00:15:12.206 "nvme_admin": false, 00:15:12.206 "nvme_io": false, 00:15:12.206 "nvme_io_md": false, 00:15:12.206 "write_zeroes": true, 00:15:12.206 "zcopy": true, 00:15:12.206 "get_zone_info": false, 00:15:12.206 "zone_management": false, 00:15:12.206 "zone_append": false, 00:15:12.206 "compare": false, 00:15:12.206 "compare_and_write": false, 00:15:12.206 "abort": true, 00:15:12.206 "seek_hole": false, 00:15:12.206 "seek_data": false, 00:15:12.206 "copy": true, 00:15:12.206 "nvme_iov_md": false 00:15:12.206 }, 00:15:12.206 "memory_domains": [ 00:15:12.206 { 00:15:12.206 "dma_device_id": "system", 00:15:12.206 "dma_device_type": 1 00:15:12.206 }, 00:15:12.206 { 00:15:12.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.206 "dma_device_type": 2 00:15:12.206 } 00:15:12.206 ], 00:15:12.206 "driver_specific": {} 00:15:12.206 } 00:15:12.206 ] 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:12.206 15:42:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.206 "name": "Existed_Raid", 00:15:12.206 "uuid": "88758430-34d0-448c-810f-10c47d3ac4da", 00:15:12.206 "strip_size_kb": 64, 00:15:12.206 "state": "online", 
00:15:12.206 "raid_level": "raid5f", 00:15:12.206 "superblock": false, 00:15:12.206 "num_base_bdevs": 3, 00:15:12.206 "num_base_bdevs_discovered": 3, 00:15:12.206 "num_base_bdevs_operational": 3, 00:15:12.206 "base_bdevs_list": [ 00:15:12.206 { 00:15:12.206 "name": "NewBaseBdev", 00:15:12.206 "uuid": "bfcbddbf-d2fd-475e-8f64-15c3f634f72b", 00:15:12.206 "is_configured": true, 00:15:12.206 "data_offset": 0, 00:15:12.206 "data_size": 65536 00:15:12.206 }, 00:15:12.206 { 00:15:12.206 "name": "BaseBdev2", 00:15:12.206 "uuid": "997369f3-a978-44f6-9916-0566016e3545", 00:15:12.206 "is_configured": true, 00:15:12.206 "data_offset": 0, 00:15:12.206 "data_size": 65536 00:15:12.206 }, 00:15:12.206 { 00:15:12.206 "name": "BaseBdev3", 00:15:12.206 "uuid": "010dd995-5828-432c-9f2c-a9869ebf3ba1", 00:15:12.206 "is_configured": true, 00:15:12.206 "data_offset": 0, 00:15:12.206 "data_size": 65536 00:15:12.206 } 00:15:12.206 ] 00:15:12.206 }' 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.206 15:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.465 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:12.465 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:12.465 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:12.465 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:12.465 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:12.465 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:12.465 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:12.465 15:42:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:12.465 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.465 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.465 [2024-11-25 15:42:11.101448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.465 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.465 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:12.465 "name": "Existed_Raid", 00:15:12.465 "aliases": [ 00:15:12.465 "88758430-34d0-448c-810f-10c47d3ac4da" 00:15:12.465 ], 00:15:12.465 "product_name": "Raid Volume", 00:15:12.465 "block_size": 512, 00:15:12.465 "num_blocks": 131072, 00:15:12.465 "uuid": "88758430-34d0-448c-810f-10c47d3ac4da", 00:15:12.465 "assigned_rate_limits": { 00:15:12.465 "rw_ios_per_sec": 0, 00:15:12.465 "rw_mbytes_per_sec": 0, 00:15:12.465 "r_mbytes_per_sec": 0, 00:15:12.465 "w_mbytes_per_sec": 0 00:15:12.465 }, 00:15:12.465 "claimed": false, 00:15:12.465 "zoned": false, 00:15:12.465 "supported_io_types": { 00:15:12.465 "read": true, 00:15:12.465 "write": true, 00:15:12.465 "unmap": false, 00:15:12.465 "flush": false, 00:15:12.465 "reset": true, 00:15:12.465 "nvme_admin": false, 00:15:12.465 "nvme_io": false, 00:15:12.465 "nvme_io_md": false, 00:15:12.465 "write_zeroes": true, 00:15:12.465 "zcopy": false, 00:15:12.465 "get_zone_info": false, 00:15:12.465 "zone_management": false, 00:15:12.465 "zone_append": false, 00:15:12.465 "compare": false, 00:15:12.465 "compare_and_write": false, 00:15:12.465 "abort": false, 00:15:12.465 "seek_hole": false, 00:15:12.465 "seek_data": false, 00:15:12.465 "copy": false, 00:15:12.465 "nvme_iov_md": false 00:15:12.465 }, 00:15:12.465 "driver_specific": { 00:15:12.465 "raid": { 00:15:12.465 "uuid": 
"88758430-34d0-448c-810f-10c47d3ac4da", 00:15:12.465 "strip_size_kb": 64, 00:15:12.466 "state": "online", 00:15:12.466 "raid_level": "raid5f", 00:15:12.466 "superblock": false, 00:15:12.466 "num_base_bdevs": 3, 00:15:12.466 "num_base_bdevs_discovered": 3, 00:15:12.466 "num_base_bdevs_operational": 3, 00:15:12.466 "base_bdevs_list": [ 00:15:12.466 { 00:15:12.466 "name": "NewBaseBdev", 00:15:12.466 "uuid": "bfcbddbf-d2fd-475e-8f64-15c3f634f72b", 00:15:12.466 "is_configured": true, 00:15:12.466 "data_offset": 0, 00:15:12.466 "data_size": 65536 00:15:12.466 }, 00:15:12.466 { 00:15:12.466 "name": "BaseBdev2", 00:15:12.466 "uuid": "997369f3-a978-44f6-9916-0566016e3545", 00:15:12.466 "is_configured": true, 00:15:12.466 "data_offset": 0, 00:15:12.466 "data_size": 65536 00:15:12.466 }, 00:15:12.466 { 00:15:12.466 "name": "BaseBdev3", 00:15:12.466 "uuid": "010dd995-5828-432c-9f2c-a9869ebf3ba1", 00:15:12.466 "is_configured": true, 00:15:12.466 "data_offset": 0, 00:15:12.466 "data_size": 65536 00:15:12.466 } 00:15:12.466 ] 00:15:12.466 } 00:15:12.466 } 00:15:12.466 }' 00:15:12.466 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:12.725 BaseBdev2 00:15:12.725 BaseBdev3' 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.725 [2024-11-25 15:42:11.396793] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.725 [2024-11-25 15:42:11.396858] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.725 [2024-11-25 15:42:11.396928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.725 [2024-11-25 15:42:11.397204] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.725 [2024-11-25 15:42:11.397219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.725 15:42:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79529 00:15:12.726 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79529 ']' 00:15:12.726 15:42:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79529 00:15:12.984 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:12.984 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.984 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79529 00:15:12.984 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.984 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.984 killing process with pid 79529 00:15:12.984 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79529' 00:15:12.984 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79529 00:15:12.984 [2024-11-25 15:42:11.430393] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.984 15:42:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79529 00:15:13.244 [2024-11-25 15:42:11.715397] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:14.183 15:42:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:14.183 00:15:14.183 real 0m10.248s 00:15:14.183 user 0m16.305s 00:15:14.183 sys 0m1.836s 00:15:14.183 15:42:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.183 15:42:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.183 ************************************ 00:15:14.183 END TEST raid5f_state_function_test 00:15:14.183 ************************************ 00:15:14.183 15:42:12 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:14.183 15:42:12 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:14.183 15:42:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.183 15:42:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:14.183 ************************************ 00:15:14.183 START TEST raid5f_state_function_test_sb 00:15:14.183 ************************************ 00:15:14.183 15:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:14.183 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:14.184 15:42:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80145 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:14.184 Process raid pid: 80145 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80145' 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80145 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80145 ']' 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.184 15:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.443 [2024-11-25 15:42:12.924921] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:15:14.443 [2024-11-25 15:42:12.925067] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.443 [2024-11-25 15:42:13.092912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.703 [2024-11-25 15:42:13.192401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.963 [2024-11-25 15:42:13.391481] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.963 [2024-11-25 15:42:13.391510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.223 15:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.223 15:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:15.223 15:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:15.223 15:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.223 15:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.223 [2024-11-25 15:42:13.735243] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:15.223 [2024-11-25 15:42:13.735351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:15.223 [2024-11-25 15:42:13.735366] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:15.223 [2024-11-25 15:42:13.735376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:15.223 [2024-11-25 15:42:13.735382] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:15.223 [2024-11-25 15:42:13.735390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:15.223 15:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.223 15:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:15.223 15:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.224 15:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.224 15:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.224 15:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.224 15:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.224 15:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.224 15:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.224 15:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.224 15:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.224 15:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.224 15:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.224 15:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.224 15:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.224 15:42:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.224 15:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.224 "name": "Existed_Raid", 00:15:15.224 "uuid": "e4a9ab5f-bc50-4688-8724-432f72a566ef", 00:15:15.224 "strip_size_kb": 64, 00:15:15.224 "state": "configuring", 00:15:15.224 "raid_level": "raid5f", 00:15:15.224 "superblock": true, 00:15:15.224 "num_base_bdevs": 3, 00:15:15.224 "num_base_bdevs_discovered": 0, 00:15:15.224 "num_base_bdevs_operational": 3, 00:15:15.224 "base_bdevs_list": [ 00:15:15.224 { 00:15:15.224 "name": "BaseBdev1", 00:15:15.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.224 "is_configured": false, 00:15:15.224 "data_offset": 0, 00:15:15.224 "data_size": 0 00:15:15.224 }, 00:15:15.224 { 00:15:15.224 "name": "BaseBdev2", 00:15:15.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.224 "is_configured": false, 00:15:15.224 "data_offset": 0, 00:15:15.224 "data_size": 0 00:15:15.224 }, 00:15:15.224 { 00:15:15.224 "name": "BaseBdev3", 00:15:15.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.224 "is_configured": false, 00:15:15.224 "data_offset": 0, 00:15:15.224 "data_size": 0 00:15:15.224 } 00:15:15.224 ] 00:15:15.224 }' 00:15:15.224 15:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.224 15:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.794 15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:15.794 15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.794 15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.794 [2024-11-25 15:42:14.178368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:15.794 
[2024-11-25 15:42:14.178441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:15.794 [2024-11-25 15:42:14.190366] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:15.794 [2024-11-25 15:42:14.190442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:15.794 [2024-11-25 15:42:14.190486] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:15.794 [2024-11-25 15:42:14.190508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:15.794 [2024-11-25 15:42:14.190525] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:15.794 [2024-11-25 15:42:14.190545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:15.794 [2024-11-25 15:42:14.236393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:15.794 BaseBdev1
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:15.794 [
00:15:15.794 {
00:15:15.794 "name": "BaseBdev1",
00:15:15.794 "aliases": [
00:15:15.794 "b82e38b5-f41d-4eac-8e67-921102fb83a4"
00:15:15.794 ],
00:15:15.794 "product_name": "Malloc disk",
00:15:15.794 "block_size": 512,
00:15:15.794 "num_blocks": 65536,
00:15:15.794 "uuid": "b82e38b5-f41d-4eac-8e67-921102fb83a4",
00:15:15.794 "assigned_rate_limits": {
00:15:15.794 "rw_ios_per_sec": 0,
00:15:15.794 "rw_mbytes_per_sec": 0,
00:15:15.794 "r_mbytes_per_sec": 0,
00:15:15.794 "w_mbytes_per_sec": 0
00:15:15.794 },
00:15:15.794 "claimed": true,
00:15:15.794 "claim_type": "exclusive_write",
00:15:15.794 "zoned": false,
00:15:15.794 "supported_io_types": {
00:15:15.794 "read": true,
00:15:15.794 "write": true,
00:15:15.794 "unmap": true,
00:15:15.794 "flush": true,
00:15:15.794 "reset": true,
00:15:15.794 "nvme_admin": false,
00:15:15.794 "nvme_io": false,
00:15:15.794 "nvme_io_md": false,
00:15:15.794 "write_zeroes": true,
00:15:15.794 "zcopy": true,
00:15:15.794 "get_zone_info": false,
00:15:15.794 "zone_management": false,
00:15:15.794 "zone_append": false,
00:15:15.794 "compare": false,
00:15:15.794 "compare_and_write": false,
00:15:15.794 "abort": true,
00:15:15.794 "seek_hole": false,
00:15:15.794 "seek_data": false,
00:15:15.794 "copy": true,
00:15:15.794 "nvme_iov_md": false
00:15:15.794 },
00:15:15.794 "memory_domains": [
00:15:15.794 {
00:15:15.794 "dma_device_id": "system",
00:15:15.794 "dma_device_type": 1
00:15:15.794 },
00:15:15.794 {
00:15:15.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:15.794 "dma_device_type": 2
00:15:15.794 }
00:15:15.794 ],
00:15:15.794 "driver_specific": {}
00:15:15.794 }
00:15:15.794 ]
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.794  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:15.794 "name": "Existed_Raid",
00:15:15.795 "uuid": "4a001d6c-b429-452c-84d2-a8f8ae2ee654",
00:15:15.795 "strip_size_kb": 64,
00:15:15.795 "state": "configuring",
00:15:15.795 "raid_level": "raid5f",
00:15:15.795 "superblock": true,
00:15:15.795 "num_base_bdevs": 3,
00:15:15.795 "num_base_bdevs_discovered": 1,
00:15:15.795 "num_base_bdevs_operational": 3,
00:15:15.795 "base_bdevs_list": [
00:15:15.795 {
00:15:15.795 "name": "BaseBdev1",
00:15:15.795 "uuid": "b82e38b5-f41d-4eac-8e67-921102fb83a4",
00:15:15.795 "is_configured": true,
00:15:15.795 "data_offset": 2048,
00:15:15.795 "data_size": 63488
00:15:15.795 },
00:15:15.795 {
00:15:15.795 "name": "BaseBdev2",
00:15:15.795 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:15.795 "is_configured": false,
00:15:15.795 "data_offset": 0,
00:15:15.795 "data_size": 0
00:15:15.795 },
00:15:15.795 {
00:15:15.795 "name": "BaseBdev3",
00:15:15.795 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:15.795 "is_configured": false,
00:15:15.795 "data_offset": 0,
00:15:15.795 "data_size": 0
00:15:15.795 }
00:15:15.795 ]
00:15:15.795 }'
00:15:15.795  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:15.795  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:16.365 [2024-11-25 15:42:14.759527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:16.365 [2024-11-25 15:42:14.759572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:16.365 [2024-11-25 15:42:14.771558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:16.365 [2024-11-25 15:42:14.773264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:16.365 [2024-11-25 15:42:14.773305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:16.365 [2024-11-25 15:42:14.773315] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:16.365 [2024-11-25 15:42:14.773324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:16.365 "name": "Existed_Raid",
00:15:16.365 "uuid": "f5585f5f-ffc3-4df2-8f9b-472b8b8c005e",
00:15:16.365 "strip_size_kb": 64,
00:15:16.365 "state": "configuring",
00:15:16.365 "raid_level": "raid5f",
00:15:16.365 "superblock": true,
00:15:16.365 "num_base_bdevs": 3,
00:15:16.365 "num_base_bdevs_discovered": 1,
00:15:16.365 "num_base_bdevs_operational": 3,
00:15:16.365 "base_bdevs_list": [
00:15:16.365 {
00:15:16.365 "name": "BaseBdev1",
00:15:16.365 "uuid": "b82e38b5-f41d-4eac-8e67-921102fb83a4",
00:15:16.365 "is_configured": true,
00:15:16.365 "data_offset": 2048,
00:15:16.365 "data_size": 63488
00:15:16.365 },
00:15:16.365 {
00:15:16.365 "name": "BaseBdev2",
00:15:16.365 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:16.365 "is_configured": false,
00:15:16.365 "data_offset": 0,
00:15:16.365 "data_size": 0
00:15:16.365 },
00:15:16.365 {
00:15:16.365 "name": "BaseBdev3",
00:15:16.365 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:16.365 "is_configured": false,
00:15:16.365 "data_offset": 0,
00:15:16.365 "data_size": 0
00:15:16.365 }
00:15:16.365 ]
00:15:16.365 }'
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:16.365  15:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:16.625 [2024-11-25 15:42:15.206823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:16.625 BaseBdev2
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:16.625 [
00:15:16.625 {
00:15:16.625 "name": "BaseBdev2",
00:15:16.625 "aliases": [
00:15:16.625 "ccb47e33-afc3-4003-becc-511ac5d89baa"
00:15:16.625 ],
00:15:16.625 "product_name": "Malloc disk",
00:15:16.625 "block_size": 512,
00:15:16.625 "num_blocks": 65536,
00:15:16.625 "uuid": "ccb47e33-afc3-4003-becc-511ac5d89baa",
00:15:16.625 "assigned_rate_limits": {
00:15:16.625 "rw_ios_per_sec": 0,
00:15:16.625 "rw_mbytes_per_sec": 0,
00:15:16.625 "r_mbytes_per_sec": 0,
00:15:16.625 "w_mbytes_per_sec": 0
00:15:16.625 },
00:15:16.625 "claimed": true,
00:15:16.625 "claim_type": "exclusive_write",
00:15:16.625 "zoned": false,
00:15:16.625 "supported_io_types": {
00:15:16.625 "read": true,
00:15:16.625 "write": true,
00:15:16.625 "unmap": true,
00:15:16.625 "flush": true,
00:15:16.625 "reset": true,
00:15:16.625 "nvme_admin": false,
00:15:16.625 "nvme_io": false,
00:15:16.625 "nvme_io_md": false,
00:15:16.625 "write_zeroes": true,
00:15:16.625 "zcopy": true,
00:15:16.625 "get_zone_info": false,
00:15:16.625 "zone_management": false,
00:15:16.625 "zone_append": false,
00:15:16.625 "compare": false,
00:15:16.625 "compare_and_write": false,
00:15:16.625 "abort": true,
00:15:16.625 "seek_hole": false,
00:15:16.625 "seek_data": false,
00:15:16.625 "copy": true,
00:15:16.625 "nvme_iov_md": false
00:15:16.625 },
00:15:16.625 "memory_domains": [
00:15:16.625 {
00:15:16.625 "dma_device_id": "system",
00:15:16.625 "dma_device_type": 1
00:15:16.625 },
00:15:16.625 {
00:15:16.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:16.625 "dma_device_type": 2
00:15:16.625 }
00:15:16.625 ],
00:15:16.625 "driver_specific": {}
00:15:16.625 }
00:15:16.625 ]
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:16.625 "name": "Existed_Raid",
00:15:16.625 "uuid": "f5585f5f-ffc3-4df2-8f9b-472b8b8c005e",
00:15:16.625 "strip_size_kb": 64,
00:15:16.625 "state": "configuring",
00:15:16.625 "raid_level": "raid5f",
00:15:16.625 "superblock": true,
00:15:16.625 "num_base_bdevs": 3,
00:15:16.625 "num_base_bdevs_discovered": 2,
00:15:16.625 "num_base_bdevs_operational": 3,
00:15:16.625 "base_bdevs_list": [
00:15:16.625 {
00:15:16.625 "name": "BaseBdev1",
00:15:16.625 "uuid": "b82e38b5-f41d-4eac-8e67-921102fb83a4",
00:15:16.625 "is_configured": true,
00:15:16.625 "data_offset": 2048,
00:15:16.625 "data_size": 63488
00:15:16.625 },
00:15:16.625 {
00:15:16.625 "name": "BaseBdev2",
00:15:16.625 "uuid": "ccb47e33-afc3-4003-becc-511ac5d89baa",
00:15:16.625 "is_configured": true,
00:15:16.625 "data_offset": 2048,
00:15:16.625 "data_size": 63488
00:15:16.625 },
00:15:16.625 {
00:15:16.625 "name": "BaseBdev3",
00:15:16.625 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:16.625 "is_configured": false,
00:15:16.625 "data_offset": 0,
00:15:16.625 "data_size": 0
00:15:16.625 }
00:15:16.625 ]
00:15:16.625 }'
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:16.625  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:17.193 [2024-11-25 15:42:15.781468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:17.193 [2024-11-25 15:42:15.781711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:15:17.193 [2024-11-25 15:42:15.781732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:15:17.193 [2024-11-25 15:42:15.781971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:15:17.193 BaseBdev3
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:17.193 [2024-11-25 15:42:15.787566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:15:17.193 [2024-11-25 15:42:15.787640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:15:17.193 [2024-11-25 15:42:15.787900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:17.193 [
00:15:17.193 {
00:15:17.193 "name": "BaseBdev3",
00:15:17.193 "aliases": [
00:15:17.193 "4adc095a-df9b-4dc7-a058-90a271cb419f"
00:15:17.193 ],
00:15:17.193 "product_name": "Malloc disk",
00:15:17.193 "block_size": 512,
00:15:17.193 "num_blocks": 65536,
00:15:17.193 "uuid": "4adc095a-df9b-4dc7-a058-90a271cb419f",
00:15:17.193 "assigned_rate_limits": {
00:15:17.193 "rw_ios_per_sec": 0,
00:15:17.193 "rw_mbytes_per_sec": 0,
00:15:17.193 "r_mbytes_per_sec": 0,
00:15:17.193 "w_mbytes_per_sec": 0
00:15:17.193 },
00:15:17.193 "claimed": true,
00:15:17.193 "claim_type": "exclusive_write",
00:15:17.193 "zoned": false,
00:15:17.193 "supported_io_types": {
00:15:17.193 "read": true,
00:15:17.193 "write": true,
00:15:17.193 "unmap": true,
00:15:17.193 "flush": true,
00:15:17.193 "reset": true,
00:15:17.193 "nvme_admin": false,
00:15:17.193 "nvme_io": false,
00:15:17.193 "nvme_io_md": false,
00:15:17.193 "write_zeroes": true,
00:15:17.193 "zcopy": true,
00:15:17.193 "get_zone_info": false,
00:15:17.193 "zone_management": false,
00:15:17.193 "zone_append": false,
00:15:17.193 "compare": false,
00:15:17.193 "compare_and_write": false,
00:15:17.193 "abort": true,
00:15:17.193 "seek_hole": false,
00:15:17.193 "seek_data": false,
00:15:17.193 "copy": true,
00:15:17.193 "nvme_iov_md": false
00:15:17.193 },
00:15:17.193 "memory_domains": [
00:15:17.193 {
00:15:17.193 "dma_device_id": "system",
00:15:17.193 "dma_device_type": 1
00:15:17.193 },
00:15:17.193 {
00:15:17.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:17.193 "dma_device_type": 2
00:15:17.193 }
00:15:17.193 ],
00:15:17.193 "driver_specific": {}
00:15:17.193 }
00:15:17.193 ]
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:17.193  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.452  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:17.452 "name": "Existed_Raid",
00:15:17.452 "uuid": "f5585f5f-ffc3-4df2-8f9b-472b8b8c005e",
00:15:17.452 "strip_size_kb": 64,
00:15:17.452 "state": "online",
00:15:17.452 "raid_level": "raid5f",
00:15:17.452 "superblock": true,
00:15:17.452 "num_base_bdevs": 3,
00:15:17.452 "num_base_bdevs_discovered": 3,
00:15:17.452 "num_base_bdevs_operational": 3,
00:15:17.452 "base_bdevs_list": [
00:15:17.452 {
00:15:17.452 "name": "BaseBdev1",
00:15:17.452 "uuid": "b82e38b5-f41d-4eac-8e67-921102fb83a4",
00:15:17.452 "is_configured": true,
00:15:17.452 "data_offset": 2048,
00:15:17.452 "data_size": 63488
00:15:17.452 },
00:15:17.452 {
00:15:17.452 "name": "BaseBdev2",
00:15:17.452 "uuid": "ccb47e33-afc3-4003-becc-511ac5d89baa",
00:15:17.452 "is_configured": true,
00:15:17.452 "data_offset": 2048,
00:15:17.452 "data_size": 63488
00:15:17.452 },
00:15:17.452 {
00:15:17.452 "name": "BaseBdev3",
00:15:17.452 "uuid": "4adc095a-df9b-4dc7-a058-90a271cb419f",
00:15:17.452 "is_configured": true,
00:15:17.452 "data_offset": 2048,
00:15:17.452 "data_size": 63488
00:15:17.452 }
00:15:17.452 ]
00:15:17.452 }'
00:15:17.452  15:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:17.452  15:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:17.711  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:15:17.711  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:15:17.711  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:17.711  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:17.711  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:15:17.711  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:17.711  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:15:17.711  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:17.711  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.711  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:17.711 [2024-11-25 15:42:16.265338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:17.711  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.711  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:17.711 "name": "Existed_Raid",
00:15:17.711 "aliases": [
00:15:17.711 "f5585f5f-ffc3-4df2-8f9b-472b8b8c005e"
00:15:17.711 ],
00:15:17.711 "product_name": "Raid Volume",
00:15:17.711 "block_size": 512,
00:15:17.711 "num_blocks": 126976,
00:15:17.711 "uuid": "f5585f5f-ffc3-4df2-8f9b-472b8b8c005e",
00:15:17.711 "assigned_rate_limits": {
00:15:17.711 "rw_ios_per_sec": 0,
00:15:17.711 "rw_mbytes_per_sec": 0,
00:15:17.711 "r_mbytes_per_sec": 0,
00:15:17.711 "w_mbytes_per_sec": 0
00:15:17.711 },
00:15:17.711 "claimed": false,
00:15:17.711 "zoned": false,
00:15:17.711 "supported_io_types": {
00:15:17.711 "read": true,
00:15:17.711 "write": true,
00:15:17.711 "unmap": false,
00:15:17.711 "flush": false,
00:15:17.711 "reset": true,
00:15:17.711 "nvme_admin": false,
00:15:17.711 "nvme_io": false,
00:15:17.711 "nvme_io_md": false,
00:15:17.711 "write_zeroes": true,
00:15:17.711 "zcopy": false,
00:15:17.711 "get_zone_info": false,
00:15:17.711 "zone_management": false,
00:15:17.711 "zone_append": false,
00:15:17.711 "compare": false,
00:15:17.711 "compare_and_write": false,
00:15:17.711 "abort": false,
00:15:17.711 "seek_hole": false,
00:15:17.711 "seek_data": false,
00:15:17.711 "copy": false,
00:15:17.711 "nvme_iov_md": false
00:15:17.711 },
00:15:17.711 "driver_specific": {
00:15:17.711 "raid": {
00:15:17.711 "uuid": "f5585f5f-ffc3-4df2-8f9b-472b8b8c005e",
00:15:17.711 "strip_size_kb": 64,
00:15:17.711 "state": "online",
00:15:17.711 "raid_level": "raid5f",
00:15:17.711 "superblock": true,
00:15:17.711 "num_base_bdevs": 3,
00:15:17.711 "num_base_bdevs_discovered": 3,
00:15:17.711 "num_base_bdevs_operational": 3,
00:15:17.711 "base_bdevs_list": [
00:15:17.711 {
00:15:17.711 "name": "BaseBdev1",
00:15:17.711 "uuid": "b82e38b5-f41d-4eac-8e67-921102fb83a4",
00:15:17.711 "is_configured": true,
00:15:17.711 "data_offset": 2048,
00:15:17.711 "data_size": 63488
00:15:17.711 },
00:15:17.711 {
00:15:17.711 "name": "BaseBdev2",
00:15:17.711 "uuid": "ccb47e33-afc3-4003-becc-511ac5d89baa",
00:15:17.711 "is_configured": true,
00:15:17.711 "data_offset": 2048,
00:15:17.711 "data_size": 63488
00:15:17.711 },
00:15:17.711 {
00:15:17.711 "name": "BaseBdev3",
00:15:17.711 "uuid": "4adc095a-df9b-4dc7-a058-90a271cb419f",
00:15:17.711 "is_configured": true,
00:15:17.711 "data_offset": 2048,
00:15:17.711 "data_size": 63488
00:15:17.711 }
00:15:17.711 ]
00:15:17.711 }
00:15:17.711 }
00:15:17.711 }'
00:15:17.711  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:17.711  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:15:17.711 BaseBdev2
00:15:17.711 BaseBdev3'
00:15:17.711  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:17.975 [2024-11-25 15:42:16.536723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:17.975  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:18.250  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:18.250 "name": "Existed_Raid",
00:15:18.250 "uuid": "f5585f5f-ffc3-4df2-8f9b-472b8b8c005e",
00:15:18.250 "strip_size_kb": 64,
00:15:18.250 "state": "online",
00:15:18.250 "raid_level": "raid5f",
00:15:18.250 "superblock": true,
00:15:18.250 "num_base_bdevs": 3,
00:15:18.250 "num_base_bdevs_discovered": 2,
00:15:18.250 "num_base_bdevs_operational": 2,
00:15:18.250 "base_bdevs_list": [
00:15:18.250 {
00:15:18.250 "name": null,
00:15:18.250 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:18.250 "is_configured": false,
00:15:18.250 "data_offset": 0,
00:15:18.250 "data_size": 63488
00:15:18.250 },
00:15:18.250 {
00:15:18.250 "name": "BaseBdev2",
00:15:18.250 "uuid": "ccb47e33-afc3-4003-becc-511ac5d89baa",
00:15:18.250 "is_configured": true,
00:15:18.250 "data_offset": 2048,
00:15:18.250 "data_size": 63488
00:15:18.250 },
00:15:18.250 {
00:15:18.250 "name": "BaseBdev3",
00:15:18.250 "uuid": "4adc095a-df9b-4dc7-a058-90a271cb419f",
00:15:18.250 "is_configured": true,
00:15:18.250 "data_offset": 2048,
00:15:18.250 "data_size": 63488
00:15:18.250 }
00:15:18.250 ]
00:15:18.250 }'
00:15:18.250  15:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:18.250  15:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:18.526  15:42:17
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:18.526 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.526 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.526 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.526 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.526 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:18.526 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.526 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:18.526 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:18.526 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:18.526 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.526 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.526 [2024-11-25 15:42:17.123981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:18.526 [2024-11-25 15:42:17.124175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.787 [2024-11-25 15:42:17.212082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.787 [2024-11-25 15:42:17.267962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:18.787 [2024-11-25 15:42:17.268064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.787 BaseBdev2 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.787 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.787 [ 00:15:18.787 { 00:15:18.787 "name": "BaseBdev2", 00:15:18.787 "aliases": [ 00:15:18.787 "f6fcbe81-e6fd-4557-9519-4082daab6d60" 00:15:18.787 ], 00:15:18.787 "product_name": "Malloc disk", 00:15:18.787 "block_size": 512, 00:15:18.787 "num_blocks": 65536, 00:15:18.787 "uuid": "f6fcbe81-e6fd-4557-9519-4082daab6d60", 00:15:18.787 "assigned_rate_limits": { 00:15:18.787 "rw_ios_per_sec": 0, 00:15:18.787 "rw_mbytes_per_sec": 0, 00:15:18.787 "r_mbytes_per_sec": 0, 00:15:18.787 "w_mbytes_per_sec": 0 00:15:18.787 }, 00:15:18.787 "claimed": false, 00:15:18.787 "zoned": false, 00:15:18.787 "supported_io_types": { 00:15:18.787 "read": true, 00:15:18.787 "write": true, 00:15:18.787 "unmap": true, 00:15:18.787 "flush": true, 00:15:18.787 "reset": true, 00:15:18.787 "nvme_admin": false, 00:15:18.787 "nvme_io": false, 00:15:18.787 "nvme_io_md": false, 00:15:18.787 "write_zeroes": true, 00:15:18.787 "zcopy": true, 00:15:18.787 "get_zone_info": false, 00:15:18.787 "zone_management": false, 00:15:18.787 "zone_append": false, 
00:15:18.787 "compare": false, 00:15:18.787 "compare_and_write": false, 00:15:18.787 "abort": true, 00:15:18.787 "seek_hole": false, 00:15:19.048 "seek_data": false, 00:15:19.048 "copy": true, 00:15:19.048 "nvme_iov_md": false 00:15:19.048 }, 00:15:19.048 "memory_domains": [ 00:15:19.048 { 00:15:19.048 "dma_device_id": "system", 00:15:19.048 "dma_device_type": 1 00:15:19.048 }, 00:15:19.048 { 00:15:19.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.048 "dma_device_type": 2 00:15:19.048 } 00:15:19.048 ], 00:15:19.048 "driver_specific": {} 00:15:19.048 } 00:15:19.048 ] 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.048 BaseBdev3 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:19.048 
15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.048 [ 00:15:19.048 { 00:15:19.048 "name": "BaseBdev3", 00:15:19.048 "aliases": [ 00:15:19.048 "c52b951a-f239-4e87-ba1e-ccb57010f5ed" 00:15:19.048 ], 00:15:19.048 "product_name": "Malloc disk", 00:15:19.048 "block_size": 512, 00:15:19.048 "num_blocks": 65536, 00:15:19.048 "uuid": "c52b951a-f239-4e87-ba1e-ccb57010f5ed", 00:15:19.048 "assigned_rate_limits": { 00:15:19.048 "rw_ios_per_sec": 0, 00:15:19.048 "rw_mbytes_per_sec": 0, 00:15:19.048 "r_mbytes_per_sec": 0, 00:15:19.048 "w_mbytes_per_sec": 0 00:15:19.048 }, 00:15:19.048 "claimed": false, 00:15:19.048 "zoned": false, 00:15:19.048 "supported_io_types": { 00:15:19.048 "read": true, 00:15:19.048 "write": true, 00:15:19.048 "unmap": true, 00:15:19.048 "flush": true, 00:15:19.048 "reset": true, 00:15:19.048 "nvme_admin": false, 00:15:19.048 "nvme_io": false, 00:15:19.048 "nvme_io_md": false, 00:15:19.048 "write_zeroes": true, 00:15:19.048 "zcopy": true, 00:15:19.048 "get_zone_info": 
false, 00:15:19.048 "zone_management": false, 00:15:19.048 "zone_append": false, 00:15:19.048 "compare": false, 00:15:19.048 "compare_and_write": false, 00:15:19.048 "abort": true, 00:15:19.048 "seek_hole": false, 00:15:19.048 "seek_data": false, 00:15:19.048 "copy": true, 00:15:19.048 "nvme_iov_md": false 00:15:19.048 }, 00:15:19.048 "memory_domains": [ 00:15:19.048 { 00:15:19.048 "dma_device_id": "system", 00:15:19.048 "dma_device_type": 1 00:15:19.048 }, 00:15:19.048 { 00:15:19.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.048 "dma_device_type": 2 00:15:19.048 } 00:15:19.048 ], 00:15:19.048 "driver_specific": {} 00:15:19.048 } 00:15:19.048 ] 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.048 [2024-11-25 15:42:17.552979] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.048 [2024-11-25 15:42:17.553077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.048 [2024-11-25 15:42:17.553122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.048 [2024-11-25 15:42:17.554851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.048 15:42:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.048 "name": "Existed_Raid", 00:15:19.048 "uuid": "24a7a368-9c3d-47e1-860e-c9544034bab8", 00:15:19.048 "strip_size_kb": 64, 00:15:19.048 "state": "configuring", 00:15:19.048 "raid_level": "raid5f", 00:15:19.048 "superblock": true, 00:15:19.048 "num_base_bdevs": 3, 00:15:19.048 "num_base_bdevs_discovered": 2, 00:15:19.048 "num_base_bdevs_operational": 3, 00:15:19.048 "base_bdevs_list": [ 00:15:19.048 { 00:15:19.048 "name": "BaseBdev1", 00:15:19.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.048 "is_configured": false, 00:15:19.048 "data_offset": 0, 00:15:19.048 "data_size": 0 00:15:19.048 }, 00:15:19.048 { 00:15:19.048 "name": "BaseBdev2", 00:15:19.048 "uuid": "f6fcbe81-e6fd-4557-9519-4082daab6d60", 00:15:19.048 "is_configured": true, 00:15:19.048 "data_offset": 2048, 00:15:19.048 "data_size": 63488 00:15:19.048 }, 00:15:19.048 { 00:15:19.048 "name": "BaseBdev3", 00:15:19.048 "uuid": "c52b951a-f239-4e87-ba1e-ccb57010f5ed", 00:15:19.048 "is_configured": true, 00:15:19.048 "data_offset": 2048, 00:15:19.048 "data_size": 63488 00:15:19.048 } 00:15:19.048 ] 00:15:19.048 }' 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.048 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.309 [2024-11-25 15:42:17.968266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.309 
15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.309 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.569 15:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.569 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.569 "name": "Existed_Raid", 00:15:19.569 "uuid": 
"24a7a368-9c3d-47e1-860e-c9544034bab8", 00:15:19.569 "strip_size_kb": 64, 00:15:19.569 "state": "configuring", 00:15:19.569 "raid_level": "raid5f", 00:15:19.569 "superblock": true, 00:15:19.569 "num_base_bdevs": 3, 00:15:19.569 "num_base_bdevs_discovered": 1, 00:15:19.569 "num_base_bdevs_operational": 3, 00:15:19.569 "base_bdevs_list": [ 00:15:19.569 { 00:15:19.569 "name": "BaseBdev1", 00:15:19.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.569 "is_configured": false, 00:15:19.569 "data_offset": 0, 00:15:19.569 "data_size": 0 00:15:19.569 }, 00:15:19.569 { 00:15:19.569 "name": null, 00:15:19.569 "uuid": "f6fcbe81-e6fd-4557-9519-4082daab6d60", 00:15:19.569 "is_configured": false, 00:15:19.569 "data_offset": 0, 00:15:19.569 "data_size": 63488 00:15:19.569 }, 00:15:19.569 { 00:15:19.569 "name": "BaseBdev3", 00:15:19.569 "uuid": "c52b951a-f239-4e87-ba1e-ccb57010f5ed", 00:15:19.569 "is_configured": true, 00:15:19.569 "data_offset": 2048, 00:15:19.569 "data_size": 63488 00:15:19.569 } 00:15:19.569 ] 00:15:19.569 }' 00:15:19.569 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.569 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:19.829 15:42:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.829 [2024-11-25 15:42:18.447490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.829 BaseBdev1 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.829 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.830 [ 00:15:19.830 { 00:15:19.830 "name": "BaseBdev1", 00:15:19.830 "aliases": [ 00:15:19.830 "3795464d-df38-4aaa-b7eb-267f1ae75e50" 00:15:19.830 ], 00:15:19.830 "product_name": "Malloc disk", 00:15:19.830 "block_size": 512, 00:15:19.830 "num_blocks": 65536, 00:15:19.830 "uuid": "3795464d-df38-4aaa-b7eb-267f1ae75e50", 00:15:19.830 "assigned_rate_limits": { 00:15:19.830 "rw_ios_per_sec": 0, 00:15:19.830 "rw_mbytes_per_sec": 0, 00:15:19.830 "r_mbytes_per_sec": 0, 00:15:19.830 "w_mbytes_per_sec": 0 00:15:19.830 }, 00:15:19.830 "claimed": true, 00:15:19.830 "claim_type": "exclusive_write", 00:15:19.830 "zoned": false, 00:15:19.830 "supported_io_types": { 00:15:19.830 "read": true, 00:15:19.830 "write": true, 00:15:19.830 "unmap": true, 00:15:19.830 "flush": true, 00:15:19.830 "reset": true, 00:15:19.830 "nvme_admin": false, 00:15:19.830 "nvme_io": false, 00:15:19.830 "nvme_io_md": false, 00:15:19.830 "write_zeroes": true, 00:15:19.830 "zcopy": true, 00:15:19.830 "get_zone_info": false, 00:15:19.830 "zone_management": false, 00:15:19.830 "zone_append": false, 00:15:19.830 "compare": false, 00:15:19.830 "compare_and_write": false, 00:15:19.830 "abort": true, 00:15:19.830 "seek_hole": false, 00:15:19.830 "seek_data": false, 00:15:19.830 "copy": true, 00:15:19.830 "nvme_iov_md": false 00:15:19.830 }, 00:15:19.830 "memory_domains": [ 00:15:19.830 { 00:15:19.830 "dma_device_id": "system", 00:15:19.830 "dma_device_type": 1 00:15:19.830 }, 00:15:19.830 { 00:15:19.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.830 "dma_device_type": 2 00:15:19.830 } 00:15:19.830 ], 00:15:19.830 "driver_specific": {} 00:15:19.830 } 00:15:19.830 ] 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.830 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.090 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.090 "name": "Existed_Raid", 00:15:20.090 "uuid": 
"24a7a368-9c3d-47e1-860e-c9544034bab8", 00:15:20.090 "strip_size_kb": 64, 00:15:20.090 "state": "configuring", 00:15:20.090 "raid_level": "raid5f", 00:15:20.090 "superblock": true, 00:15:20.090 "num_base_bdevs": 3, 00:15:20.090 "num_base_bdevs_discovered": 2, 00:15:20.090 "num_base_bdevs_operational": 3, 00:15:20.090 "base_bdevs_list": [ 00:15:20.090 { 00:15:20.090 "name": "BaseBdev1", 00:15:20.090 "uuid": "3795464d-df38-4aaa-b7eb-267f1ae75e50", 00:15:20.090 "is_configured": true, 00:15:20.090 "data_offset": 2048, 00:15:20.090 "data_size": 63488 00:15:20.090 }, 00:15:20.090 { 00:15:20.090 "name": null, 00:15:20.090 "uuid": "f6fcbe81-e6fd-4557-9519-4082daab6d60", 00:15:20.090 "is_configured": false, 00:15:20.090 "data_offset": 0, 00:15:20.090 "data_size": 63488 00:15:20.090 }, 00:15:20.090 { 00:15:20.090 "name": "BaseBdev3", 00:15:20.090 "uuid": "c52b951a-f239-4e87-ba1e-ccb57010f5ed", 00:15:20.090 "is_configured": true, 00:15:20.090 "data_offset": 2048, 00:15:20.090 "data_size": 63488 00:15:20.090 } 00:15:20.090 ] 00:15:20.090 }' 00:15:20.090 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.090 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:20.350 15:42:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.350 [2024-11-25 15:42:18.966737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.350 15:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.350 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.350 "name": "Existed_Raid", 00:15:20.350 "uuid": "24a7a368-9c3d-47e1-860e-c9544034bab8", 00:15:20.350 "strip_size_kb": 64, 00:15:20.350 "state": "configuring", 00:15:20.350 "raid_level": "raid5f", 00:15:20.350 "superblock": true, 00:15:20.351 "num_base_bdevs": 3, 00:15:20.351 "num_base_bdevs_discovered": 1, 00:15:20.351 "num_base_bdevs_operational": 3, 00:15:20.351 "base_bdevs_list": [ 00:15:20.351 { 00:15:20.351 "name": "BaseBdev1", 00:15:20.351 "uuid": "3795464d-df38-4aaa-b7eb-267f1ae75e50", 00:15:20.351 "is_configured": true, 00:15:20.351 "data_offset": 2048, 00:15:20.351 "data_size": 63488 00:15:20.351 }, 00:15:20.351 { 00:15:20.351 "name": null, 00:15:20.351 "uuid": "f6fcbe81-e6fd-4557-9519-4082daab6d60", 00:15:20.351 "is_configured": false, 00:15:20.351 "data_offset": 0, 00:15:20.351 "data_size": 63488 00:15:20.351 }, 00:15:20.351 { 00:15:20.351 "name": null, 00:15:20.351 "uuid": "c52b951a-f239-4e87-ba1e-ccb57010f5ed", 00:15:20.351 "is_configured": false, 00:15:20.351 "data_offset": 0, 00:15:20.351 "data_size": 63488 00:15:20.351 } 00:15:20.351 ] 00:15:20.351 }' 00:15:20.351 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.351 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.920 [2024-11-25 15:42:19.449928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.920 "name": "Existed_Raid", 00:15:20.920 "uuid": "24a7a368-9c3d-47e1-860e-c9544034bab8", 00:15:20.920 "strip_size_kb": 64, 00:15:20.920 "state": "configuring", 00:15:20.920 "raid_level": "raid5f", 00:15:20.920 "superblock": true, 00:15:20.920 "num_base_bdevs": 3, 00:15:20.920 "num_base_bdevs_discovered": 2, 00:15:20.920 "num_base_bdevs_operational": 3, 00:15:20.920 "base_bdevs_list": [ 00:15:20.920 { 00:15:20.920 "name": "BaseBdev1", 00:15:20.920 "uuid": "3795464d-df38-4aaa-b7eb-267f1ae75e50", 00:15:20.920 "is_configured": true, 00:15:20.920 "data_offset": 2048, 00:15:20.920 "data_size": 63488 00:15:20.920 }, 00:15:20.920 { 00:15:20.920 "name": null, 00:15:20.920 "uuid": "f6fcbe81-e6fd-4557-9519-4082daab6d60", 00:15:20.920 "is_configured": false, 00:15:20.920 "data_offset": 0, 00:15:20.920 "data_size": 63488 00:15:20.920 }, 00:15:20.920 { 00:15:20.920 "name": "BaseBdev3", 00:15:20.920 "uuid": "c52b951a-f239-4e87-ba1e-ccb57010f5ed", 
00:15:20.920 "is_configured": true, 00:15:20.920 "data_offset": 2048, 00:15:20.920 "data_size": 63488 00:15:20.920 } 00:15:20.920 ] 00:15:20.920 }' 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.920 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.490 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.490 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.490 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.490 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:21.490 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.490 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:21.490 15:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:21.490 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.490 15:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.490 [2024-11-25 15:42:19.937100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.490 "name": "Existed_Raid", 00:15:21.490 "uuid": "24a7a368-9c3d-47e1-860e-c9544034bab8", 00:15:21.490 "strip_size_kb": 64, 00:15:21.490 "state": "configuring", 00:15:21.490 "raid_level": "raid5f", 00:15:21.490 "superblock": true, 00:15:21.490 "num_base_bdevs": 3, 00:15:21.490 "num_base_bdevs_discovered": 1, 00:15:21.490 "num_base_bdevs_operational": 3, 00:15:21.490 "base_bdevs_list": [ 00:15:21.490 { 00:15:21.490 
"name": null, 00:15:21.490 "uuid": "3795464d-df38-4aaa-b7eb-267f1ae75e50", 00:15:21.490 "is_configured": false, 00:15:21.490 "data_offset": 0, 00:15:21.490 "data_size": 63488 00:15:21.490 }, 00:15:21.490 { 00:15:21.490 "name": null, 00:15:21.490 "uuid": "f6fcbe81-e6fd-4557-9519-4082daab6d60", 00:15:21.490 "is_configured": false, 00:15:21.490 "data_offset": 0, 00:15:21.490 "data_size": 63488 00:15:21.490 }, 00:15:21.490 { 00:15:21.490 "name": "BaseBdev3", 00:15:21.490 "uuid": "c52b951a-f239-4e87-ba1e-ccb57010f5ed", 00:15:21.490 "is_configured": true, 00:15:21.490 "data_offset": 2048, 00:15:21.490 "data_size": 63488 00:15:21.490 } 00:15:21.490 ] 00:15:21.490 }' 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.490 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.060 [2024-11-25 
15:42:20.495746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.060 "name": "Existed_Raid", 00:15:22.060 "uuid": "24a7a368-9c3d-47e1-860e-c9544034bab8", 00:15:22.060 "strip_size_kb": 64, 00:15:22.060 "state": "configuring", 00:15:22.060 "raid_level": "raid5f", 00:15:22.060 "superblock": true, 00:15:22.060 "num_base_bdevs": 3, 00:15:22.060 "num_base_bdevs_discovered": 2, 00:15:22.060 "num_base_bdevs_operational": 3, 00:15:22.060 "base_bdevs_list": [ 00:15:22.060 { 00:15:22.060 "name": null, 00:15:22.060 "uuid": "3795464d-df38-4aaa-b7eb-267f1ae75e50", 00:15:22.060 "is_configured": false, 00:15:22.060 "data_offset": 0, 00:15:22.060 "data_size": 63488 00:15:22.060 }, 00:15:22.060 { 00:15:22.060 "name": "BaseBdev2", 00:15:22.060 "uuid": "f6fcbe81-e6fd-4557-9519-4082daab6d60", 00:15:22.060 "is_configured": true, 00:15:22.060 "data_offset": 2048, 00:15:22.060 "data_size": 63488 00:15:22.060 }, 00:15:22.060 { 00:15:22.060 "name": "BaseBdev3", 00:15:22.060 "uuid": "c52b951a-f239-4e87-ba1e-ccb57010f5ed", 00:15:22.060 "is_configured": true, 00:15:22.060 "data_offset": 2048, 00:15:22.060 "data_size": 63488 00:15:22.060 } 00:15:22.060 ] 00:15:22.060 }' 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.060 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.321 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.321 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:22.321 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.321 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.321 15:42:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.321 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:22.321 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:22.321 15:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.321 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.321 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.321 15:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.581 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3795464d-df38-4aaa-b7eb-267f1ae75e50 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.582 [2024-11-25 15:42:21.042743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:22.582 [2024-11-25 15:42:21.043032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:22.582 [2024-11-25 15:42:21.043086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:22.582 [2024-11-25 15:42:21.043423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:22.582 NewBaseBdev 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:22.582 15:42:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.582 [2024-11-25 15:42:21.048801] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:22.582 [2024-11-25 15:42:21.048860] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:22.582 [2024-11-25 15:42:21.049053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.582 [ 00:15:22.582 { 00:15:22.582 "name": "NewBaseBdev", 00:15:22.582 "aliases": [ 00:15:22.582 "3795464d-df38-4aaa-b7eb-267f1ae75e50" 00:15:22.582 ], 00:15:22.582 "product_name": "Malloc 
disk", 00:15:22.582 "block_size": 512, 00:15:22.582 "num_blocks": 65536, 00:15:22.582 "uuid": "3795464d-df38-4aaa-b7eb-267f1ae75e50", 00:15:22.582 "assigned_rate_limits": { 00:15:22.582 "rw_ios_per_sec": 0, 00:15:22.582 "rw_mbytes_per_sec": 0, 00:15:22.582 "r_mbytes_per_sec": 0, 00:15:22.582 "w_mbytes_per_sec": 0 00:15:22.582 }, 00:15:22.582 "claimed": true, 00:15:22.582 "claim_type": "exclusive_write", 00:15:22.582 "zoned": false, 00:15:22.582 "supported_io_types": { 00:15:22.582 "read": true, 00:15:22.582 "write": true, 00:15:22.582 "unmap": true, 00:15:22.582 "flush": true, 00:15:22.582 "reset": true, 00:15:22.582 "nvme_admin": false, 00:15:22.582 "nvme_io": false, 00:15:22.582 "nvme_io_md": false, 00:15:22.582 "write_zeroes": true, 00:15:22.582 "zcopy": true, 00:15:22.582 "get_zone_info": false, 00:15:22.582 "zone_management": false, 00:15:22.582 "zone_append": false, 00:15:22.582 "compare": false, 00:15:22.582 "compare_and_write": false, 00:15:22.582 "abort": true, 00:15:22.582 "seek_hole": false, 00:15:22.582 "seek_data": false, 00:15:22.582 "copy": true, 00:15:22.582 "nvme_iov_md": false 00:15:22.582 }, 00:15:22.582 "memory_domains": [ 00:15:22.582 { 00:15:22.582 "dma_device_id": "system", 00:15:22.582 "dma_device_type": 1 00:15:22.582 }, 00:15:22.582 { 00:15:22.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.582 "dma_device_type": 2 00:15:22.582 } 00:15:22.582 ], 00:15:22.582 "driver_specific": {} 00:15:22.582 } 00:15:22.582 ] 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.582 15:42:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.582 "name": "Existed_Raid", 00:15:22.582 "uuid": "24a7a368-9c3d-47e1-860e-c9544034bab8", 00:15:22.582 "strip_size_kb": 64, 00:15:22.582 "state": "online", 00:15:22.582 "raid_level": "raid5f", 00:15:22.582 "superblock": true, 00:15:22.582 "num_base_bdevs": 3, 00:15:22.582 "num_base_bdevs_discovered": 3, 00:15:22.582 "num_base_bdevs_operational": 3, 00:15:22.582 
"base_bdevs_list": [ 00:15:22.582 { 00:15:22.582 "name": "NewBaseBdev", 00:15:22.582 "uuid": "3795464d-df38-4aaa-b7eb-267f1ae75e50", 00:15:22.582 "is_configured": true, 00:15:22.582 "data_offset": 2048, 00:15:22.582 "data_size": 63488 00:15:22.582 }, 00:15:22.582 { 00:15:22.582 "name": "BaseBdev2", 00:15:22.582 "uuid": "f6fcbe81-e6fd-4557-9519-4082daab6d60", 00:15:22.582 "is_configured": true, 00:15:22.582 "data_offset": 2048, 00:15:22.582 "data_size": 63488 00:15:22.582 }, 00:15:22.582 { 00:15:22.582 "name": "BaseBdev3", 00:15:22.582 "uuid": "c52b951a-f239-4e87-ba1e-ccb57010f5ed", 00:15:22.582 "is_configured": true, 00:15:22.582 "data_offset": 2048, 00:15:22.582 "data_size": 63488 00:15:22.582 } 00:15:22.582 ] 00:15:22.582 }' 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.582 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.843 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:22.843 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:22.843 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:22.843 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:22.843 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:23.103 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:23.103 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:23.103 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:23.103 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:23.103 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.103 [2024-11-25 15:42:21.534796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.103 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.103 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:23.103 "name": "Existed_Raid", 00:15:23.103 "aliases": [ 00:15:23.103 "24a7a368-9c3d-47e1-860e-c9544034bab8" 00:15:23.103 ], 00:15:23.103 "product_name": "Raid Volume", 00:15:23.103 "block_size": 512, 00:15:23.103 "num_blocks": 126976, 00:15:23.103 "uuid": "24a7a368-9c3d-47e1-860e-c9544034bab8", 00:15:23.103 "assigned_rate_limits": { 00:15:23.103 "rw_ios_per_sec": 0, 00:15:23.103 "rw_mbytes_per_sec": 0, 00:15:23.104 "r_mbytes_per_sec": 0, 00:15:23.104 "w_mbytes_per_sec": 0 00:15:23.104 }, 00:15:23.104 "claimed": false, 00:15:23.104 "zoned": false, 00:15:23.104 "supported_io_types": { 00:15:23.104 "read": true, 00:15:23.104 "write": true, 00:15:23.104 "unmap": false, 00:15:23.104 "flush": false, 00:15:23.104 "reset": true, 00:15:23.104 "nvme_admin": false, 00:15:23.104 "nvme_io": false, 00:15:23.104 "nvme_io_md": false, 00:15:23.104 "write_zeroes": true, 00:15:23.104 "zcopy": false, 00:15:23.104 "get_zone_info": false, 00:15:23.104 "zone_management": false, 00:15:23.104 "zone_append": false, 00:15:23.104 "compare": false, 00:15:23.104 "compare_and_write": false, 00:15:23.104 "abort": false, 00:15:23.104 "seek_hole": false, 00:15:23.104 "seek_data": false, 00:15:23.104 "copy": false, 00:15:23.104 "nvme_iov_md": false 00:15:23.104 }, 00:15:23.104 "driver_specific": { 00:15:23.104 "raid": { 00:15:23.104 "uuid": "24a7a368-9c3d-47e1-860e-c9544034bab8", 00:15:23.104 "strip_size_kb": 64, 00:15:23.104 "state": "online", 00:15:23.104 "raid_level": "raid5f", 00:15:23.104 "superblock": true, 00:15:23.104 
"num_base_bdevs": 3, 00:15:23.104 "num_base_bdevs_discovered": 3, 00:15:23.104 "num_base_bdevs_operational": 3, 00:15:23.104 "base_bdevs_list": [ 00:15:23.104 { 00:15:23.104 "name": "NewBaseBdev", 00:15:23.104 "uuid": "3795464d-df38-4aaa-b7eb-267f1ae75e50", 00:15:23.104 "is_configured": true, 00:15:23.104 "data_offset": 2048, 00:15:23.104 "data_size": 63488 00:15:23.104 }, 00:15:23.104 { 00:15:23.104 "name": "BaseBdev2", 00:15:23.104 "uuid": "f6fcbe81-e6fd-4557-9519-4082daab6d60", 00:15:23.104 "is_configured": true, 00:15:23.104 "data_offset": 2048, 00:15:23.104 "data_size": 63488 00:15:23.104 }, 00:15:23.104 { 00:15:23.104 "name": "BaseBdev3", 00:15:23.104 "uuid": "c52b951a-f239-4e87-ba1e-ccb57010f5ed", 00:15:23.104 "is_configured": true, 00:15:23.104 "data_offset": 2048, 00:15:23.104 "data_size": 63488 00:15:23.104 } 00:15:23.104 ] 00:15:23.104 } 00:15:23.104 } 00:15:23.104 }' 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:23.104 BaseBdev2 00:15:23.104 BaseBdev3' 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.104 15:42:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.104 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.364 [2024-11-25 15:42:21.818108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.364 [2024-11-25 15:42:21.818131] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:23.364 [2024-11-25 15:42:21.818199] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.364 [2024-11-25 15:42:21.818464] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.364 [2024-11-25 15:42:21.818477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80145 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80145 ']' 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80145 00:15:23.364 15:42:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80145 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80145' 00:15:23.364 killing process with pid 80145 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80145 00:15:23.364 [2024-11-25 15:42:21.863622] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:23.364 15:42:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80145 00:15:23.625 [2024-11-25 15:42:22.142459] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:24.566 ************************************ 00:15:24.566 END TEST raid5f_state_function_test_sb 00:15:24.566 ************************************ 00:15:24.566 15:42:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:24.566 00:15:24.566 real 0m10.343s 00:15:24.566 user 0m16.476s 00:15:24.566 sys 0m1.892s 00:15:24.566 15:42:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.566 15:42:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.566 15:42:23 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:24.566 15:42:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:24.566 
15:42:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.566 15:42:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:24.566 ************************************ 00:15:24.566 START TEST raid5f_superblock_test 00:15:24.566 ************************************ 00:15:24.566 15:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:24.826 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80760 00:15:24.827 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:24.827 15:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80760 00:15:24.827 15:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 80760 ']' 00:15:24.827 15:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.827 15:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.827 15:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.827 15:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.827 15:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.827 [2024-11-25 15:42:23.333152] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:15:24.827 [2024-11-25 15:42:23.333284] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80760 ] 00:15:25.087 [2024-11-25 15:42:23.507365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.087 [2024-11-25 15:42:23.615802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.347 [2024-11-25 15:42:23.811295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.347 [2024-11-25 15:42:23.811407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.607 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.608 malloc1 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.608 [2024-11-25 15:42:24.184571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:25.608 [2024-11-25 15:42:24.184703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.608 [2024-11-25 15:42:24.184745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:25.608 [2024-11-25 15:42:24.184792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.608 [2024-11-25 15:42:24.186841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.608 [2024-11-25 15:42:24.186918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:25.608 pt1 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.608 malloc2 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.608 [2024-11-25 15:42:24.240589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:25.608 [2024-11-25 15:42:24.240640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.608 [2024-11-25 15:42:24.240660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:25.608 [2024-11-25 15:42:24.240669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.608 [2024-11-25 15:42:24.242609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.608 [2024-11-25 15:42:24.242644] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:25.608 pt2 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.608 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.869 malloc3 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.869 [2024-11-25 15:42:24.323218] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:25.869 [2024-11-25 15:42:24.323323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.869 [2024-11-25 15:42:24.323360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:25.869 [2024-11-25 15:42:24.323387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.869 [2024-11-25 15:42:24.325327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.869 [2024-11-25 15:42:24.325392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:25.869 pt3 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.869 [2024-11-25 15:42:24.335252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:25.869 [2024-11-25 15:42:24.336996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:25.869 [2024-11-25 15:42:24.337129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:25.869 [2024-11-25 15:42:24.337332] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:25.869 [2024-11-25 15:42:24.337386] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:25.869 [2024-11-25 15:42:24.337639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:25.869 [2024-11-25 15:42:24.343020] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:25.869 [2024-11-25 15:42:24.343083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:25.869 [2024-11-25 15:42:24.343310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.869 "name": "raid_bdev1", 00:15:25.869 "uuid": "a7c6a60e-27c2-452c-b866-ed98644f88b7", 00:15:25.869 "strip_size_kb": 64, 00:15:25.869 "state": "online", 00:15:25.869 "raid_level": "raid5f", 00:15:25.869 "superblock": true, 00:15:25.869 "num_base_bdevs": 3, 00:15:25.869 "num_base_bdevs_discovered": 3, 00:15:25.869 "num_base_bdevs_operational": 3, 00:15:25.869 "base_bdevs_list": [ 00:15:25.869 { 00:15:25.869 "name": "pt1", 00:15:25.869 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:25.869 "is_configured": true, 00:15:25.869 "data_offset": 2048, 00:15:25.869 "data_size": 63488 00:15:25.869 }, 00:15:25.869 { 00:15:25.869 "name": "pt2", 00:15:25.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:25.869 "is_configured": true, 00:15:25.869 "data_offset": 2048, 00:15:25.869 "data_size": 63488 00:15:25.869 }, 00:15:25.869 { 00:15:25.869 "name": "pt3", 00:15:25.869 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:25.869 "is_configured": true, 00:15:25.869 "data_offset": 2048, 00:15:25.869 "data_size": 63488 00:15:25.869 } 00:15:25.869 ] 00:15:25.869 }' 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.869 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.130 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:26.130 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:26.130 15:42:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:26.130 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:26.130 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:26.130 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:26.130 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:26.130 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:26.130 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.130 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.130 [2024-11-25 15:42:24.772732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.130 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.130 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:26.130 "name": "raid_bdev1", 00:15:26.130 "aliases": [ 00:15:26.130 "a7c6a60e-27c2-452c-b866-ed98644f88b7" 00:15:26.130 ], 00:15:26.130 "product_name": "Raid Volume", 00:15:26.130 "block_size": 512, 00:15:26.130 "num_blocks": 126976, 00:15:26.130 "uuid": "a7c6a60e-27c2-452c-b866-ed98644f88b7", 00:15:26.130 "assigned_rate_limits": { 00:15:26.130 "rw_ios_per_sec": 0, 00:15:26.130 "rw_mbytes_per_sec": 0, 00:15:26.130 "r_mbytes_per_sec": 0, 00:15:26.130 "w_mbytes_per_sec": 0 00:15:26.130 }, 00:15:26.130 "claimed": false, 00:15:26.130 "zoned": false, 00:15:26.130 "supported_io_types": { 00:15:26.130 "read": true, 00:15:26.130 "write": true, 00:15:26.130 "unmap": false, 00:15:26.130 "flush": false, 00:15:26.130 "reset": true, 00:15:26.130 "nvme_admin": false, 00:15:26.130 "nvme_io": false, 00:15:26.130 "nvme_io_md": false, 
00:15:26.130 "write_zeroes": true, 00:15:26.130 "zcopy": false, 00:15:26.130 "get_zone_info": false, 00:15:26.130 "zone_management": false, 00:15:26.130 "zone_append": false, 00:15:26.130 "compare": false, 00:15:26.130 "compare_and_write": false, 00:15:26.130 "abort": false, 00:15:26.130 "seek_hole": false, 00:15:26.130 "seek_data": false, 00:15:26.130 "copy": false, 00:15:26.130 "nvme_iov_md": false 00:15:26.130 }, 00:15:26.130 "driver_specific": { 00:15:26.130 "raid": { 00:15:26.130 "uuid": "a7c6a60e-27c2-452c-b866-ed98644f88b7", 00:15:26.130 "strip_size_kb": 64, 00:15:26.130 "state": "online", 00:15:26.130 "raid_level": "raid5f", 00:15:26.130 "superblock": true, 00:15:26.130 "num_base_bdevs": 3, 00:15:26.130 "num_base_bdevs_discovered": 3, 00:15:26.130 "num_base_bdevs_operational": 3, 00:15:26.130 "base_bdevs_list": [ 00:15:26.130 { 00:15:26.130 "name": "pt1", 00:15:26.130 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:26.130 "is_configured": true, 00:15:26.130 "data_offset": 2048, 00:15:26.130 "data_size": 63488 00:15:26.130 }, 00:15:26.130 { 00:15:26.130 "name": "pt2", 00:15:26.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.130 "is_configured": true, 00:15:26.130 "data_offset": 2048, 00:15:26.130 "data_size": 63488 00:15:26.130 }, 00:15:26.130 { 00:15:26.130 "name": "pt3", 00:15:26.130 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:26.130 "is_configured": true, 00:15:26.130 "data_offset": 2048, 00:15:26.130 "data_size": 63488 00:15:26.130 } 00:15:26.130 ] 00:15:26.130 } 00:15:26.130 } 00:15:26.130 }' 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:26.391 pt2 00:15:26.391 pt3' 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.391 15:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.391 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.391 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.391 
15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.391 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.391 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:26.391 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.391 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.391 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.391 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.391 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.391 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:26.391 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:26.391 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.391 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.652 [2024-11-25 15:42:25.072170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a7c6a60e-27c2-452c-b866-ed98644f88b7 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a7c6a60e-27c2-452c-b866-ed98644f88b7 ']' 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:26.652 15:42:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.652 [2024-11-25 15:42:25.115921] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:26.652 [2024-11-25 15:42:25.115945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.652 [2024-11-25 15:42:25.116027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.652 [2024-11-25 15:42:25.116098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.652 [2024-11-25 15:42:25.116108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:26.652 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.653 [2024-11-25 15:42:25.263711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:26.653 [2024-11-25 15:42:25.265496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:26.653 [2024-11-25 15:42:25.265547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:26.653 [2024-11-25 15:42:25.265594] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:26.653 [2024-11-25 15:42:25.265638] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:26.653 [2024-11-25 15:42:25.265656] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:26.653 [2024-11-25 15:42:25.265670] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:26.653 [2024-11-25 15:42:25.265678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:26.653 request: 00:15:26.653 { 00:15:26.653 "name": "raid_bdev1", 00:15:26.653 "raid_level": "raid5f", 00:15:26.653 "base_bdevs": [ 00:15:26.653 "malloc1", 00:15:26.653 "malloc2", 00:15:26.653 "malloc3" 00:15:26.653 ], 00:15:26.653 "strip_size_kb": 64, 00:15:26.653 "superblock": false, 00:15:26.653 "method": "bdev_raid_create", 00:15:26.653 "req_id": 1 00:15:26.653 } 00:15:26.653 Got JSON-RPC error response 00:15:26.653 response: 00:15:26.653 { 00:15:26.653 "code": -17, 00:15:26.653 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:26.653 } 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.653 [2024-11-25 15:42:25.323556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:26.653 [2024-11-25 15:42:25.323639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.653 [2024-11-25 15:42:25.323671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:26.653 [2024-11-25 15:42:25.323698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.653 [2024-11-25 15:42:25.325779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.653 [2024-11-25 15:42:25.325846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:26.653 [2024-11-25 15:42:25.325931] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:26.653 [2024-11-25 15:42:25.325990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:26.653 pt1 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.653 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.914 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.914 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.914 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.914 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.914 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.914 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.914 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.914 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.914 "name": "raid_bdev1", 00:15:26.914 "uuid": "a7c6a60e-27c2-452c-b866-ed98644f88b7", 00:15:26.914 "strip_size_kb": 64, 00:15:26.914 "state": "configuring", 00:15:26.914 "raid_level": "raid5f", 00:15:26.914 "superblock": true, 00:15:26.914 "num_base_bdevs": 3, 00:15:26.914 "num_base_bdevs_discovered": 1, 00:15:26.914 
"num_base_bdevs_operational": 3, 00:15:26.914 "base_bdevs_list": [ 00:15:26.914 { 00:15:26.914 "name": "pt1", 00:15:26.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:26.914 "is_configured": true, 00:15:26.914 "data_offset": 2048, 00:15:26.914 "data_size": 63488 00:15:26.914 }, 00:15:26.914 { 00:15:26.914 "name": null, 00:15:26.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.914 "is_configured": false, 00:15:26.914 "data_offset": 2048, 00:15:26.914 "data_size": 63488 00:15:26.914 }, 00:15:26.914 { 00:15:26.914 "name": null, 00:15:26.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:26.914 "is_configured": false, 00:15:26.914 "data_offset": 2048, 00:15:26.914 "data_size": 63488 00:15:26.914 } 00:15:26.914 ] 00:15:26.914 }' 00:15:26.914 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.914 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.175 [2024-11-25 15:42:25.798798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:27.175 [2024-11-25 15:42:25.798859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.175 [2024-11-25 15:42:25.798879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:27.175 [2024-11-25 15:42:25.798888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.175 [2024-11-25 15:42:25.799310] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.175 [2024-11-25 15:42:25.799334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:27.175 [2024-11-25 15:42:25.799417] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:27.175 [2024-11-25 15:42:25.799439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:27.175 pt2 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.175 [2024-11-25 15:42:25.806794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.175 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.435 15:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.435 "name": "raid_bdev1", 00:15:27.435 "uuid": "a7c6a60e-27c2-452c-b866-ed98644f88b7", 00:15:27.435 "strip_size_kb": 64, 00:15:27.435 "state": "configuring", 00:15:27.435 "raid_level": "raid5f", 00:15:27.435 "superblock": true, 00:15:27.435 "num_base_bdevs": 3, 00:15:27.435 "num_base_bdevs_discovered": 1, 00:15:27.435 "num_base_bdevs_operational": 3, 00:15:27.435 "base_bdevs_list": [ 00:15:27.435 { 00:15:27.435 "name": "pt1", 00:15:27.435 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.435 "is_configured": true, 00:15:27.435 "data_offset": 2048, 00:15:27.435 "data_size": 63488 00:15:27.435 }, 00:15:27.435 { 00:15:27.435 "name": null, 00:15:27.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.435 "is_configured": false, 00:15:27.435 "data_offset": 0, 00:15:27.435 "data_size": 63488 00:15:27.435 }, 00:15:27.435 { 00:15:27.435 "name": null, 00:15:27.435 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:27.435 "is_configured": false, 00:15:27.435 "data_offset": 2048, 00:15:27.435 "data_size": 63488 00:15:27.435 } 00:15:27.435 ] 00:15:27.435 }' 00:15:27.435 15:42:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.435 15:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.695 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:27.695 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:27.695 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:27.695 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.695 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.696 [2024-11-25 15:42:26.301900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:27.696 [2024-11-25 15:42:26.302000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.696 [2024-11-25 15:42:26.302041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:27.696 [2024-11-25 15:42:26.302071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.696 [2024-11-25 15:42:26.302496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.696 [2024-11-25 15:42:26.302555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:27.696 [2024-11-25 15:42:26.302653] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:27.696 [2024-11-25 15:42:26.302704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:27.696 pt2 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:27.696 15:42:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.696 [2024-11-25 15:42:26.313874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:27.696 [2024-11-25 15:42:26.313952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.696 [2024-11-25 15:42:26.313979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:27.696 [2024-11-25 15:42:26.314013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.696 [2024-11-25 15:42:26.314392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.696 [2024-11-25 15:42:26.314449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:27.696 [2024-11-25 15:42:26.314528] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:27.696 [2024-11-25 15:42:26.314574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:27.696 [2024-11-25 15:42:26.314714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:27.696 [2024-11-25 15:42:26.314751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:27.696 [2024-11-25 15:42:26.314999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:27.696 [2024-11-25 15:42:26.320147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:27.696 [2024-11-25 15:42:26.320200] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:27.696 [2024-11-25 15:42:26.320402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.696 pt3 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.696 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.956 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.956 "name": "raid_bdev1", 00:15:27.956 "uuid": "a7c6a60e-27c2-452c-b866-ed98644f88b7", 00:15:27.956 "strip_size_kb": 64, 00:15:27.956 "state": "online", 00:15:27.956 "raid_level": "raid5f", 00:15:27.956 "superblock": true, 00:15:27.956 "num_base_bdevs": 3, 00:15:27.956 "num_base_bdevs_discovered": 3, 00:15:27.956 "num_base_bdevs_operational": 3, 00:15:27.956 "base_bdevs_list": [ 00:15:27.956 { 00:15:27.956 "name": "pt1", 00:15:27.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.956 "is_configured": true, 00:15:27.956 "data_offset": 2048, 00:15:27.956 "data_size": 63488 00:15:27.956 }, 00:15:27.956 { 00:15:27.956 "name": "pt2", 00:15:27.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.956 "is_configured": true, 00:15:27.956 "data_offset": 2048, 00:15:27.956 "data_size": 63488 00:15:27.956 }, 00:15:27.956 { 00:15:27.956 "name": "pt3", 00:15:27.956 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:27.956 "is_configured": true, 00:15:27.956 "data_offset": 2048, 00:15:27.956 "data_size": 63488 00:15:27.956 } 00:15:27.956 ] 00:15:27.956 }' 00:15:27.956 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.956 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.216 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:28.216 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:28.216 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:28.216 
15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:28.216 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:28.216 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:28.216 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:28.216 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:28.216 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.216 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.216 [2024-11-25 15:42:26.781878] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.216 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.216 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:28.216 "name": "raid_bdev1", 00:15:28.216 "aliases": [ 00:15:28.216 "a7c6a60e-27c2-452c-b866-ed98644f88b7" 00:15:28.216 ], 00:15:28.216 "product_name": "Raid Volume", 00:15:28.216 "block_size": 512, 00:15:28.216 "num_blocks": 126976, 00:15:28.216 "uuid": "a7c6a60e-27c2-452c-b866-ed98644f88b7", 00:15:28.216 "assigned_rate_limits": { 00:15:28.216 "rw_ios_per_sec": 0, 00:15:28.216 "rw_mbytes_per_sec": 0, 00:15:28.216 "r_mbytes_per_sec": 0, 00:15:28.216 "w_mbytes_per_sec": 0 00:15:28.216 }, 00:15:28.216 "claimed": false, 00:15:28.216 "zoned": false, 00:15:28.216 "supported_io_types": { 00:15:28.216 "read": true, 00:15:28.216 "write": true, 00:15:28.216 "unmap": false, 00:15:28.216 "flush": false, 00:15:28.216 "reset": true, 00:15:28.216 "nvme_admin": false, 00:15:28.216 "nvme_io": false, 00:15:28.216 "nvme_io_md": false, 00:15:28.216 "write_zeroes": true, 00:15:28.216 "zcopy": false, 00:15:28.216 "get_zone_info": false, 
00:15:28.216 "zone_management": false, 00:15:28.216 "zone_append": false, 00:15:28.216 "compare": false, 00:15:28.216 "compare_and_write": false, 00:15:28.216 "abort": false, 00:15:28.216 "seek_hole": false, 00:15:28.216 "seek_data": false, 00:15:28.216 "copy": false, 00:15:28.216 "nvme_iov_md": false 00:15:28.216 }, 00:15:28.216 "driver_specific": { 00:15:28.216 "raid": { 00:15:28.216 "uuid": "a7c6a60e-27c2-452c-b866-ed98644f88b7", 00:15:28.216 "strip_size_kb": 64, 00:15:28.216 "state": "online", 00:15:28.217 "raid_level": "raid5f", 00:15:28.217 "superblock": true, 00:15:28.217 "num_base_bdevs": 3, 00:15:28.217 "num_base_bdevs_discovered": 3, 00:15:28.217 "num_base_bdevs_operational": 3, 00:15:28.217 "base_bdevs_list": [ 00:15:28.217 { 00:15:28.217 "name": "pt1", 00:15:28.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:28.217 "is_configured": true, 00:15:28.217 "data_offset": 2048, 00:15:28.217 "data_size": 63488 00:15:28.217 }, 00:15:28.217 { 00:15:28.217 "name": "pt2", 00:15:28.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.217 "is_configured": true, 00:15:28.217 "data_offset": 2048, 00:15:28.217 "data_size": 63488 00:15:28.217 }, 00:15:28.217 { 00:15:28.217 "name": "pt3", 00:15:28.217 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:28.217 "is_configured": true, 00:15:28.217 "data_offset": 2048, 00:15:28.217 "data_size": 63488 00:15:28.217 } 00:15:28.217 ] 00:15:28.217 } 00:15:28.217 } 00:15:28.217 }' 00:15:28.217 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:28.217 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:28.217 pt2 00:15:28.217 pt3' 00:15:28.217 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.217 15:42:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:28.217 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.217 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:28.217 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.217 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.217 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.217 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.477 15:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.477 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.477 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.477 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:28.477 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:28.477 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.477 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.477 [2024-11-25 15:42:27.021410] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.477 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.477 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a7c6a60e-27c2-452c-b866-ed98644f88b7 '!=' a7c6a60e-27c2-452c-b866-ed98644f88b7 ']' 00:15:28.477 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:28.477 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:28.477 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:28.477 15:42:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:28.477 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.477 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.477 [2024-11-25 15:42:27.069214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:28.477 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.477 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.478 "name": "raid_bdev1", 00:15:28.478 "uuid": "a7c6a60e-27c2-452c-b866-ed98644f88b7", 00:15:28.478 "strip_size_kb": 64, 00:15:28.478 "state": "online", 00:15:28.478 "raid_level": "raid5f", 00:15:28.478 "superblock": true, 00:15:28.478 "num_base_bdevs": 3, 00:15:28.478 "num_base_bdevs_discovered": 2, 00:15:28.478 "num_base_bdevs_operational": 2, 00:15:28.478 "base_bdevs_list": [ 00:15:28.478 { 00:15:28.478 "name": null, 00:15:28.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.478 "is_configured": false, 00:15:28.478 "data_offset": 0, 00:15:28.478 "data_size": 63488 00:15:28.478 }, 00:15:28.478 { 00:15:28.478 "name": "pt2", 00:15:28.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.478 "is_configured": true, 00:15:28.478 "data_offset": 2048, 00:15:28.478 "data_size": 63488 00:15:28.478 }, 00:15:28.478 { 00:15:28.478 "name": "pt3", 00:15:28.478 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:28.478 "is_configured": true, 00:15:28.478 "data_offset": 2048, 00:15:28.478 "data_size": 63488 00:15:28.478 } 00:15:28.478 ] 00:15:28.478 }' 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.478 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.048 [2024-11-25 15:42:27.520401] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:15:29.048 [2024-11-25 15:42:27.520470] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.048 [2024-11-25 15:42:27.520553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.048 [2024-11-25 15:42:27.520620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.048 [2024-11-25 15:42:27.520682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.048 15:42:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.048 [2024-11-25 15:42:27.608231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:29.048 [2024-11-25 15:42:27.608278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.048 [2024-11-25 15:42:27.608308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:29.048 [2024-11-25 15:42:27.608318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:29.048 [2024-11-25 15:42:27.610343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.048 [2024-11-25 15:42:27.610379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:29.048 [2024-11-25 15:42:27.610452] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:29.048 [2024-11-25 15:42:27.610496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:29.048 pt2 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.048 15:42:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.048 "name": "raid_bdev1", 00:15:29.048 "uuid": "a7c6a60e-27c2-452c-b866-ed98644f88b7", 00:15:29.048 "strip_size_kb": 64, 00:15:29.048 "state": "configuring", 00:15:29.048 "raid_level": "raid5f", 00:15:29.048 "superblock": true, 00:15:29.048 "num_base_bdevs": 3, 00:15:29.048 "num_base_bdevs_discovered": 1, 00:15:29.048 "num_base_bdevs_operational": 2, 00:15:29.048 "base_bdevs_list": [ 00:15:29.048 { 00:15:29.048 "name": null, 00:15:29.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.048 "is_configured": false, 00:15:29.048 "data_offset": 2048, 00:15:29.048 "data_size": 63488 00:15:29.048 }, 00:15:29.048 { 00:15:29.048 "name": "pt2", 00:15:29.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.048 "is_configured": true, 00:15:29.048 "data_offset": 2048, 00:15:29.048 "data_size": 63488 00:15:29.048 }, 00:15:29.048 { 00:15:29.048 "name": null, 00:15:29.048 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:29.048 "is_configured": false, 00:15:29.048 "data_offset": 2048, 00:15:29.048 "data_size": 63488 00:15:29.048 } 00:15:29.048 ] 00:15:29.048 }' 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.048 15:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.618 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:29.618 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:29.618 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # 
i=2 00:15:29.618 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:29.618 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.619 [2024-11-25 15:42:28.015541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:29.619 [2024-11-25 15:42:28.015602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.619 [2024-11-25 15:42:28.015622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:29.619 [2024-11-25 15:42:28.015633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.619 [2024-11-25 15:42:28.016058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.619 [2024-11-25 15:42:28.016090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:29.619 [2024-11-25 15:42:28.016166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:29.619 [2024-11-25 15:42:28.016197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:29.619 [2024-11-25 15:42:28.016324] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:29.619 [2024-11-25 15:42:28.016341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:29.619 [2024-11-25 15:42:28.016575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:29.619 pt3 00:15:29.619 [2024-11-25 15:42:28.021712] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:29.619 [2024-11-25 15:42:28.021731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x617000008200 00:15:29.619 [2024-11-25 15:42:28.022003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.619 "name": "raid_bdev1", 00:15:29.619 "uuid": "a7c6a60e-27c2-452c-b866-ed98644f88b7", 00:15:29.619 "strip_size_kb": 64, 00:15:29.619 "state": "online", 00:15:29.619 "raid_level": "raid5f", 00:15:29.619 "superblock": true, 00:15:29.619 "num_base_bdevs": 3, 00:15:29.619 "num_base_bdevs_discovered": 2, 00:15:29.619 "num_base_bdevs_operational": 2, 00:15:29.619 "base_bdevs_list": [ 00:15:29.619 { 00:15:29.619 "name": null, 00:15:29.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.619 "is_configured": false, 00:15:29.619 "data_offset": 2048, 00:15:29.619 "data_size": 63488 00:15:29.619 }, 00:15:29.619 { 00:15:29.619 "name": "pt2", 00:15:29.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.619 "is_configured": true, 00:15:29.619 "data_offset": 2048, 00:15:29.619 "data_size": 63488 00:15:29.619 }, 00:15:29.619 { 00:15:29.619 "name": "pt3", 00:15:29.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:29.619 "is_configured": true, 00:15:29.619 "data_offset": 2048, 00:15:29.619 "data_size": 63488 00:15:29.619 } 00:15:29.619 ] 00:15:29.619 }' 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.619 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.881 [2024-11-25 15:42:28.447926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.881 [2024-11-25 15:42:28.447998] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.881 [2024-11-25 15:42:28.448095] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:15:29.881 [2024-11-25 15:42:28.448171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.881 [2024-11-25 15:42:28.448231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:29.881 15:42:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.881 [2024-11-25 15:42:28.503873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:29.881 [2024-11-25 15:42:28.503964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.881 [2024-11-25 15:42:28.503998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:29.881 [2024-11-25 15:42:28.504043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.881 [2024-11-25 15:42:28.506189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.881 [2024-11-25 15:42:28.506250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:29.881 [2024-11-25 15:42:28.506343] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:29.881 [2024-11-25 15:42:28.506401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:29.881 [2024-11-25 15:42:28.506522] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:29.881 [2024-11-25 15:42:28.506532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.881 [2024-11-25 15:42:28.506546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:29.881 [2024-11-25 15:42:28.506602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:29.881 pt1 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:29.881 15:42:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.881 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.881 "name": "raid_bdev1", 00:15:29.881 "uuid": "a7c6a60e-27c2-452c-b866-ed98644f88b7", 00:15:29.881 "strip_size_kb": 64, 00:15:29.881 "state": "configuring", 00:15:29.881 "raid_level": "raid5f", 00:15:29.881 
"superblock": true, 00:15:29.881 "num_base_bdevs": 3, 00:15:29.882 "num_base_bdevs_discovered": 1, 00:15:29.882 "num_base_bdevs_operational": 2, 00:15:29.882 "base_bdevs_list": [ 00:15:29.882 { 00:15:29.882 "name": null, 00:15:29.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.882 "is_configured": false, 00:15:29.882 "data_offset": 2048, 00:15:29.882 "data_size": 63488 00:15:29.882 }, 00:15:29.882 { 00:15:29.882 "name": "pt2", 00:15:29.882 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.882 "is_configured": true, 00:15:29.882 "data_offset": 2048, 00:15:29.882 "data_size": 63488 00:15:29.882 }, 00:15:29.882 { 00:15:29.882 "name": null, 00:15:29.882 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:29.882 "is_configured": false, 00:15:29.882 "data_offset": 2048, 00:15:29.882 "data_size": 63488 00:15:29.882 } 00:15:29.882 ] 00:15:29.882 }' 00:15:29.882 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.882 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.452 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:30.452 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:30.452 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.452 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.452 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.452 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:30.452 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:30.452 15:42:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.452 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.452 [2024-11-25 15:42:28.971260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:30.452 [2024-11-25 15:42:28.971353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.452 [2024-11-25 15:42:28.971388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:30.452 [2024-11-25 15:42:28.971416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.452 [2024-11-25 15:42:28.971847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.452 [2024-11-25 15:42:28.971903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:30.452 [2024-11-25 15:42:28.972000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:30.452 [2024-11-25 15:42:28.972062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:30.452 [2024-11-25 15:42:28.972219] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:30.452 [2024-11-25 15:42:28.972256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:30.452 [2024-11-25 15:42:28.972511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:30.452 [2024-11-25 15:42:28.978233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:30.453 [2024-11-25 15:42:28.978302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:30.453 [2024-11-25 15:42:28.978580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.453 pt3 00:15:30.453 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:30.453 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:30.453 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.453 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.453 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.453 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.453 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.453 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.453 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.453 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.453 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.453 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.453 15:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.453 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.453 15:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.453 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.453 15:42:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.453 "name": "raid_bdev1", 00:15:30.453 "uuid": "a7c6a60e-27c2-452c-b866-ed98644f88b7", 00:15:30.453 "strip_size_kb": 64, 00:15:30.453 "state": "online", 00:15:30.453 "raid_level": 
"raid5f", 00:15:30.453 "superblock": true, 00:15:30.453 "num_base_bdevs": 3, 00:15:30.453 "num_base_bdevs_discovered": 2, 00:15:30.453 "num_base_bdevs_operational": 2, 00:15:30.453 "base_bdevs_list": [ 00:15:30.453 { 00:15:30.453 "name": null, 00:15:30.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.453 "is_configured": false, 00:15:30.453 "data_offset": 2048, 00:15:30.453 "data_size": 63488 00:15:30.453 }, 00:15:30.453 { 00:15:30.453 "name": "pt2", 00:15:30.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:30.453 "is_configured": true, 00:15:30.453 "data_offset": 2048, 00:15:30.453 "data_size": 63488 00:15:30.453 }, 00:15:30.453 { 00:15:30.453 "name": "pt3", 00:15:30.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:30.453 "is_configured": true, 00:15:30.453 "data_offset": 2048, 00:15:30.453 "data_size": 63488 00:15:30.453 } 00:15:30.453 ] 00:15:30.453 }' 00:15:30.453 15:42:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.453 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.712 15:42:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:30.712 15:42:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:30.712 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.712 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.712 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:30.972 [2024-11-25 15:42:29.404300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a7c6a60e-27c2-452c-b866-ed98644f88b7 '!=' a7c6a60e-27c2-452c-b866-ed98644f88b7 ']' 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80760 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 80760 ']' 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 80760 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80760 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.972 killing process with pid 80760 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80760' 00:15:30.972 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 80760 00:15:30.972 [2024-11-25 15:42:29.479693] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:30.972 [2024-11-25 15:42:29.479775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:30.972 [2024-11-25 15:42:29.479833] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.972 [2024-11-25 15:42:29.479845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:30.973 15:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 80760 00:15:31.232 [2024-11-25 15:42:29.753637] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.173 15:42:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:32.173 00:15:32.173 real 0m7.546s 00:15:32.173 user 0m11.878s 00:15:32.173 sys 0m1.314s 00:15:32.173 15:42:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.173 15:42:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.173 ************************************ 00:15:32.173 END TEST raid5f_superblock_test 00:15:32.173 ************************************ 00:15:32.173 15:42:30 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:32.433 15:42:30 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:32.433 15:42:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:32.433 15:42:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.433 15:42:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.433 ************************************ 00:15:32.433 START TEST raid5f_rebuild_test 00:15:32.433 ************************************ 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81198 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81198 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81198 ']' 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.433 15:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.433 [2024-11-25 15:42:30.963381] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:15:32.433 [2024-11-25 15:42:30.963584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:32.433 Zero copy mechanism will not be used. 00:15:32.433 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81198 ] 00:15:32.693 [2024-11-25 15:42:31.134957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.693 [2024-11-25 15:42:31.240680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.952 [2024-11-25 15:42:31.428077] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.952 [2024-11-25 15:42:31.428161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.212 BaseBdev1_malloc 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.212 
15:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.212 [2024-11-25 15:42:31.818823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:33.212 [2024-11-25 15:42:31.818902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.212 [2024-11-25 15:42:31.818925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:33.212 [2024-11-25 15:42:31.818935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.212 [2024-11-25 15:42:31.820943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.212 [2024-11-25 15:42:31.820983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:33.212 BaseBdev1 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.212 BaseBdev2_malloc 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.212 [2024-11-25 15:42:31.874616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:33.212 [2024-11-25 15:42:31.874727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.212 [2024-11-25 15:42:31.874747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:33.212 [2024-11-25 15:42:31.874758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.212 [2024-11-25 15:42:31.876742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.212 [2024-11-25 15:42:31.876782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:33.212 BaseBdev2 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.212 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.472 BaseBdev3_malloc 00:15:33.472 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.472 15:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:33.472 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.472 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.472 [2024-11-25 15:42:31.958458] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:33.472 [2024-11-25 15:42:31.958506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.472 [2024-11-25 15:42:31.958541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:33.472 [2024-11-25 15:42:31.958552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.472 [2024-11-25 15:42:31.960486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.472 [2024-11-25 15:42:31.960526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:33.472 BaseBdev3 00:15:33.472 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.472 15:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:33.472 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.472 15:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.472 spare_malloc 00:15:33.472 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.472 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:33.472 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.472 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.472 spare_delay 00:15:33.472 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.472 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:33.472 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:33.472 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.472 [2024-11-25 15:42:32.023685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:33.472 [2024-11-25 15:42:32.023731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.472 [2024-11-25 15:42:32.023762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:33.472 [2024-11-25 15:42:32.023772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.472 [2024-11-25 15:42:32.025765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.472 [2024-11-25 15:42:32.025819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:33.472 spare 00:15:33.472 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.472 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:33.472 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.472 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.472 [2024-11-25 15:42:32.035720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.472 [2024-11-25 15:42:32.037529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.472 [2024-11-25 15:42:32.037583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:33.472 [2024-11-25 15:42:32.037655] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:33.472 [2024-11-25 15:42:32.037665] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:33.472 [2024-11-25 
15:42:32.037890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:33.473 [2024-11-25 15:42:32.043084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:33.473 [2024-11-25 15:42:32.043112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:33.473 [2024-11-25 15:42:32.043279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.473 "name": "raid_bdev1", 00:15:33.473 "uuid": "ea1bf377-6261-44b0-9bf0-12e66102d33b", 00:15:33.473 "strip_size_kb": 64, 00:15:33.473 "state": "online", 00:15:33.473 "raid_level": "raid5f", 00:15:33.473 "superblock": false, 00:15:33.473 "num_base_bdevs": 3, 00:15:33.473 "num_base_bdevs_discovered": 3, 00:15:33.473 "num_base_bdevs_operational": 3, 00:15:33.473 "base_bdevs_list": [ 00:15:33.473 { 00:15:33.473 "name": "BaseBdev1", 00:15:33.473 "uuid": "2ca1b68c-d665-5ed3-8ca2-d4362b213082", 00:15:33.473 "is_configured": true, 00:15:33.473 "data_offset": 0, 00:15:33.473 "data_size": 65536 00:15:33.473 }, 00:15:33.473 { 00:15:33.473 "name": "BaseBdev2", 00:15:33.473 "uuid": "f10bf066-3d69-5280-b56d-56eff36735f7", 00:15:33.473 "is_configured": true, 00:15:33.473 "data_offset": 0, 00:15:33.473 "data_size": 65536 00:15:33.473 }, 00:15:33.473 { 00:15:33.473 "name": "BaseBdev3", 00:15:33.473 "uuid": "36ed23ae-0c88-5a94-bcb9-b7bf12663d12", 00:15:33.473 "is_configured": true, 00:15:33.473 "data_offset": 0, 00:15:33.473 "data_size": 65536 00:15:33.473 } 00:15:33.473 ] 00:15:33.473 }' 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.473 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.041 [2024-11-25 15:42:32.529113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:34.041 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:34.301 [2024-11-25 15:42:32.752576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:34.301 /dev/nbd0 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.301 1+0 records in 00:15:34.301 1+0 records out 00:15:34.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038142 s, 
10.7 MB/s 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:34.301 15:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:34.561 512+0 records in 00:15:34.561 512+0 records out 00:15:34.561 67108864 bytes (67 MB, 64 MiB) copied, 0.370282 s, 181 MB/s 00:15:34.561 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:34.561 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.561 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:34.561 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.561 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:34.561 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:15:34.561 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:34.820 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.820 [2024-11-25 15:42:33.415156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.820 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.820 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.820 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.820 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.820 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.820 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:34.820 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.821 [2024-11-25 15:42:33.434675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.821 "name": "raid_bdev1", 00:15:34.821 "uuid": "ea1bf377-6261-44b0-9bf0-12e66102d33b", 00:15:34.821 "strip_size_kb": 64, 00:15:34.821 "state": "online", 00:15:34.821 "raid_level": "raid5f", 00:15:34.821 "superblock": false, 00:15:34.821 "num_base_bdevs": 3, 00:15:34.821 "num_base_bdevs_discovered": 2, 00:15:34.821 "num_base_bdevs_operational": 2, 00:15:34.821 "base_bdevs_list": [ 00:15:34.821 { 00:15:34.821 "name": null, 00:15:34.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.821 "is_configured": false, 00:15:34.821 "data_offset": 0, 00:15:34.821 "data_size": 65536 00:15:34.821 }, 
00:15:34.821 { 00:15:34.821 "name": "BaseBdev2", 00:15:34.821 "uuid": "f10bf066-3d69-5280-b56d-56eff36735f7", 00:15:34.821 "is_configured": true, 00:15:34.821 "data_offset": 0, 00:15:34.821 "data_size": 65536 00:15:34.821 }, 00:15:34.821 { 00:15:34.821 "name": "BaseBdev3", 00:15:34.821 "uuid": "36ed23ae-0c88-5a94-bcb9-b7bf12663d12", 00:15:34.821 "is_configured": true, 00:15:34.821 "data_offset": 0, 00:15:34.821 "data_size": 65536 00:15:34.821 } 00:15:34.821 ] 00:15:34.821 }' 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.821 15:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.390 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:35.390 15:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.390 15:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.390 [2024-11-25 15:42:33.909860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.390 [2024-11-25 15:42:33.925947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:35.390 15:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.390 15:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:35.390 [2024-11-25 15:42:33.933560] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:36.330 15:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.330 15:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.330 15:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.330 15:42:34 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.330 15:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.330 15:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.330 15:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.330 15:42:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.330 15:42:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.330 15:42:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.330 15:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.330 "name": "raid_bdev1", 00:15:36.330 "uuid": "ea1bf377-6261-44b0-9bf0-12e66102d33b", 00:15:36.330 "strip_size_kb": 64, 00:15:36.330 "state": "online", 00:15:36.330 "raid_level": "raid5f", 00:15:36.330 "superblock": false, 00:15:36.330 "num_base_bdevs": 3, 00:15:36.330 "num_base_bdevs_discovered": 3, 00:15:36.330 "num_base_bdevs_operational": 3, 00:15:36.330 "process": { 00:15:36.330 "type": "rebuild", 00:15:36.330 "target": "spare", 00:15:36.330 "progress": { 00:15:36.330 "blocks": 20480, 00:15:36.330 "percent": 15 00:15:36.330 } 00:15:36.330 }, 00:15:36.330 "base_bdevs_list": [ 00:15:36.330 { 00:15:36.330 "name": "spare", 00:15:36.330 "uuid": "96910e7a-5f8e-5471-b7b0-3f347ed7dad7", 00:15:36.330 "is_configured": true, 00:15:36.330 "data_offset": 0, 00:15:36.330 "data_size": 65536 00:15:36.330 }, 00:15:36.330 { 00:15:36.330 "name": "BaseBdev2", 00:15:36.330 "uuid": "f10bf066-3d69-5280-b56d-56eff36735f7", 00:15:36.330 "is_configured": true, 00:15:36.330 "data_offset": 0, 00:15:36.330 "data_size": 65536 00:15:36.330 }, 00:15:36.330 { 00:15:36.330 "name": "BaseBdev3", 00:15:36.330 "uuid": "36ed23ae-0c88-5a94-bcb9-b7bf12663d12", 00:15:36.330 "is_configured": true, 00:15:36.330 
"data_offset": 0, 00:15:36.330 "data_size": 65536 00:15:36.330 } 00:15:36.330 ] 00:15:36.330 }' 00:15:36.330 15:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.590 [2024-11-25 15:42:35.088348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:36.590 [2024-11-25 15:42:35.141197] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:36.590 [2024-11-25 15:42:35.141251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.590 [2024-11-25 15:42:35.141269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:36.590 [2024-11-25 15:42:35.141277] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.590 15:42:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.590 "name": "raid_bdev1", 00:15:36.590 "uuid": "ea1bf377-6261-44b0-9bf0-12e66102d33b", 00:15:36.590 "strip_size_kb": 64, 00:15:36.590 "state": "online", 00:15:36.590 "raid_level": "raid5f", 00:15:36.590 "superblock": false, 00:15:36.590 "num_base_bdevs": 3, 00:15:36.590 "num_base_bdevs_discovered": 2, 00:15:36.590 "num_base_bdevs_operational": 2, 00:15:36.590 "base_bdevs_list": [ 00:15:36.590 { 00:15:36.590 "name": null, 00:15:36.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.590 "is_configured": false, 00:15:36.590 "data_offset": 0, 00:15:36.590 "data_size": 65536 00:15:36.590 }, 00:15:36.590 { 00:15:36.590 
"name": "BaseBdev2", 00:15:36.590 "uuid": "f10bf066-3d69-5280-b56d-56eff36735f7", 00:15:36.590 "is_configured": true, 00:15:36.590 "data_offset": 0, 00:15:36.590 "data_size": 65536 00:15:36.590 }, 00:15:36.590 { 00:15:36.590 "name": "BaseBdev3", 00:15:36.590 "uuid": "36ed23ae-0c88-5a94-bcb9-b7bf12663d12", 00:15:36.590 "is_configured": true, 00:15:36.590 "data_offset": 0, 00:15:36.590 "data_size": 65536 00:15:36.590 } 00:15:36.590 ] 00:15:36.590 }' 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.590 15:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.161 "name": "raid_bdev1", 00:15:37.161 "uuid": "ea1bf377-6261-44b0-9bf0-12e66102d33b", 00:15:37.161 "strip_size_kb": 64, 00:15:37.161 "state": 
"online", 00:15:37.161 "raid_level": "raid5f", 00:15:37.161 "superblock": false, 00:15:37.161 "num_base_bdevs": 3, 00:15:37.161 "num_base_bdevs_discovered": 2, 00:15:37.161 "num_base_bdevs_operational": 2, 00:15:37.161 "base_bdevs_list": [ 00:15:37.161 { 00:15:37.161 "name": null, 00:15:37.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.161 "is_configured": false, 00:15:37.161 "data_offset": 0, 00:15:37.161 "data_size": 65536 00:15:37.161 }, 00:15:37.161 { 00:15:37.161 "name": "BaseBdev2", 00:15:37.161 "uuid": "f10bf066-3d69-5280-b56d-56eff36735f7", 00:15:37.161 "is_configured": true, 00:15:37.161 "data_offset": 0, 00:15:37.161 "data_size": 65536 00:15:37.161 }, 00:15:37.161 { 00:15:37.161 "name": "BaseBdev3", 00:15:37.161 "uuid": "36ed23ae-0c88-5a94-bcb9-b7bf12663d12", 00:15:37.161 "is_configured": true, 00:15:37.161 "data_offset": 0, 00:15:37.161 "data_size": 65536 00:15:37.161 } 00:15:37.161 ] 00:15:37.161 }' 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.161 [2024-11-25 15:42:35.726748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:37.161 [2024-11-25 15:42:35.742991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:37.161 15:42:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.161 15:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:37.161 [2024-11-25 15:42:35.750808] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:38.100 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.100 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.100 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.100 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.100 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.100 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.100 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.100 15:42:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.100 15:42:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.100 15:42:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.360 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.360 "name": "raid_bdev1", 00:15:38.360 "uuid": "ea1bf377-6261-44b0-9bf0-12e66102d33b", 00:15:38.360 "strip_size_kb": 64, 00:15:38.360 "state": "online", 00:15:38.360 "raid_level": "raid5f", 00:15:38.360 "superblock": false, 00:15:38.360 "num_base_bdevs": 3, 00:15:38.360 "num_base_bdevs_discovered": 3, 00:15:38.360 "num_base_bdevs_operational": 3, 00:15:38.360 "process": { 00:15:38.360 "type": "rebuild", 00:15:38.360 "target": "spare", 00:15:38.360 "progress": { 
00:15:38.360 "blocks": 20480, 00:15:38.360 "percent": 15 00:15:38.360 } 00:15:38.360 }, 00:15:38.360 "base_bdevs_list": [ 00:15:38.360 { 00:15:38.360 "name": "spare", 00:15:38.360 "uuid": "96910e7a-5f8e-5471-b7b0-3f347ed7dad7", 00:15:38.360 "is_configured": true, 00:15:38.360 "data_offset": 0, 00:15:38.360 "data_size": 65536 00:15:38.360 }, 00:15:38.360 { 00:15:38.360 "name": "BaseBdev2", 00:15:38.360 "uuid": "f10bf066-3d69-5280-b56d-56eff36735f7", 00:15:38.360 "is_configured": true, 00:15:38.360 "data_offset": 0, 00:15:38.360 "data_size": 65536 00:15:38.360 }, 00:15:38.360 { 00:15:38.360 "name": "BaseBdev3", 00:15:38.360 "uuid": "36ed23ae-0c88-5a94-bcb9-b7bf12663d12", 00:15:38.360 "is_configured": true, 00:15:38.360 "data_offset": 0, 00:15:38.360 "data_size": 65536 00:15:38.360 } 00:15:38.360 ] 00:15:38.360 }' 00:15:38.360 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=529 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.361 "name": "raid_bdev1", 00:15:38.361 "uuid": "ea1bf377-6261-44b0-9bf0-12e66102d33b", 00:15:38.361 "strip_size_kb": 64, 00:15:38.361 "state": "online", 00:15:38.361 "raid_level": "raid5f", 00:15:38.361 "superblock": false, 00:15:38.361 "num_base_bdevs": 3, 00:15:38.361 "num_base_bdevs_discovered": 3, 00:15:38.361 "num_base_bdevs_operational": 3, 00:15:38.361 "process": { 00:15:38.361 "type": "rebuild", 00:15:38.361 "target": "spare", 00:15:38.361 "progress": { 00:15:38.361 "blocks": 22528, 00:15:38.361 "percent": 17 00:15:38.361 } 00:15:38.361 }, 00:15:38.361 "base_bdevs_list": [ 00:15:38.361 { 00:15:38.361 "name": "spare", 00:15:38.361 "uuid": "96910e7a-5f8e-5471-b7b0-3f347ed7dad7", 00:15:38.361 "is_configured": true, 00:15:38.361 "data_offset": 0, 00:15:38.361 "data_size": 65536 00:15:38.361 }, 00:15:38.361 { 00:15:38.361 "name": "BaseBdev2", 00:15:38.361 "uuid": "f10bf066-3d69-5280-b56d-56eff36735f7", 00:15:38.361 "is_configured": true, 00:15:38.361 
"data_offset": 0, 00:15:38.361 "data_size": 65536 00:15:38.361 }, 00:15:38.361 { 00:15:38.361 "name": "BaseBdev3", 00:15:38.361 "uuid": "36ed23ae-0c88-5a94-bcb9-b7bf12663d12", 00:15:38.361 "is_configured": true, 00:15:38.361 "data_offset": 0, 00:15:38.361 "data_size": 65536 00:15:38.361 } 00:15:38.361 ] 00:15:38.361 }' 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.361 15:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.361 15:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.361 15:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.742 15:42:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.742 "name": "raid_bdev1", 00:15:39.742 "uuid": "ea1bf377-6261-44b0-9bf0-12e66102d33b", 00:15:39.742 "strip_size_kb": 64, 00:15:39.742 "state": "online", 00:15:39.742 "raid_level": "raid5f", 00:15:39.742 "superblock": false, 00:15:39.742 "num_base_bdevs": 3, 00:15:39.742 "num_base_bdevs_discovered": 3, 00:15:39.742 "num_base_bdevs_operational": 3, 00:15:39.742 "process": { 00:15:39.742 "type": "rebuild", 00:15:39.742 "target": "spare", 00:15:39.742 "progress": { 00:15:39.742 "blocks": 45056, 00:15:39.742 "percent": 34 00:15:39.742 } 00:15:39.742 }, 00:15:39.742 "base_bdevs_list": [ 00:15:39.742 { 00:15:39.742 "name": "spare", 00:15:39.742 "uuid": "96910e7a-5f8e-5471-b7b0-3f347ed7dad7", 00:15:39.742 "is_configured": true, 00:15:39.742 "data_offset": 0, 00:15:39.742 "data_size": 65536 00:15:39.742 }, 00:15:39.742 { 00:15:39.742 "name": "BaseBdev2", 00:15:39.742 "uuid": "f10bf066-3d69-5280-b56d-56eff36735f7", 00:15:39.742 "is_configured": true, 00:15:39.742 "data_offset": 0, 00:15:39.742 "data_size": 65536 00:15:39.742 }, 00:15:39.742 { 00:15:39.742 "name": "BaseBdev3", 00:15:39.742 "uuid": "36ed23ae-0c88-5a94-bcb9-b7bf12663d12", 00:15:39.742 "is_configured": true, 00:15:39.742 "data_offset": 0, 00:15:39.742 "data_size": 65536 00:15:39.742 } 00:15:39.742 ] 00:15:39.742 }' 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.742 15:42:38 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.679 "name": "raid_bdev1", 00:15:40.679 "uuid": "ea1bf377-6261-44b0-9bf0-12e66102d33b", 00:15:40.679 "strip_size_kb": 64, 00:15:40.679 "state": "online", 00:15:40.679 "raid_level": "raid5f", 00:15:40.679 "superblock": false, 00:15:40.679 "num_base_bdevs": 3, 00:15:40.679 "num_base_bdevs_discovered": 3, 00:15:40.679 "num_base_bdevs_operational": 3, 00:15:40.679 "process": { 00:15:40.679 "type": "rebuild", 00:15:40.679 "target": "spare", 00:15:40.679 "progress": { 00:15:40.679 "blocks": 69632, 00:15:40.679 "percent": 53 00:15:40.679 } 00:15:40.679 }, 00:15:40.679 "base_bdevs_list": [ 00:15:40.679 { 00:15:40.679 "name": "spare", 00:15:40.679 
"uuid": "96910e7a-5f8e-5471-b7b0-3f347ed7dad7", 00:15:40.679 "is_configured": true, 00:15:40.679 "data_offset": 0, 00:15:40.679 "data_size": 65536 00:15:40.679 }, 00:15:40.679 { 00:15:40.679 "name": "BaseBdev2", 00:15:40.679 "uuid": "f10bf066-3d69-5280-b56d-56eff36735f7", 00:15:40.679 "is_configured": true, 00:15:40.679 "data_offset": 0, 00:15:40.679 "data_size": 65536 00:15:40.679 }, 00:15:40.679 { 00:15:40.679 "name": "BaseBdev3", 00:15:40.679 "uuid": "36ed23ae-0c88-5a94-bcb9-b7bf12663d12", 00:15:40.679 "is_configured": true, 00:15:40.679 "data_offset": 0, 00:15:40.679 "data_size": 65536 00:15:40.679 } 00:15:40.679 ] 00:15:40.679 }' 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.679 15:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.061 15:42:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.061 "name": "raid_bdev1", 00:15:42.061 "uuid": "ea1bf377-6261-44b0-9bf0-12e66102d33b", 00:15:42.061 "strip_size_kb": 64, 00:15:42.061 "state": "online", 00:15:42.061 "raid_level": "raid5f", 00:15:42.061 "superblock": false, 00:15:42.061 "num_base_bdevs": 3, 00:15:42.061 "num_base_bdevs_discovered": 3, 00:15:42.061 "num_base_bdevs_operational": 3, 00:15:42.061 "process": { 00:15:42.061 "type": "rebuild", 00:15:42.061 "target": "spare", 00:15:42.061 "progress": { 00:15:42.061 "blocks": 92160, 00:15:42.061 "percent": 70 00:15:42.061 } 00:15:42.061 }, 00:15:42.061 "base_bdevs_list": [ 00:15:42.061 { 00:15:42.061 "name": "spare", 00:15:42.061 "uuid": "96910e7a-5f8e-5471-b7b0-3f347ed7dad7", 00:15:42.061 "is_configured": true, 00:15:42.061 "data_offset": 0, 00:15:42.061 "data_size": 65536 00:15:42.061 }, 00:15:42.061 { 00:15:42.061 "name": "BaseBdev2", 00:15:42.061 "uuid": "f10bf066-3d69-5280-b56d-56eff36735f7", 00:15:42.061 "is_configured": true, 00:15:42.061 "data_offset": 0, 00:15:42.061 "data_size": 65536 00:15:42.061 }, 00:15:42.061 { 00:15:42.061 "name": "BaseBdev3", 00:15:42.061 "uuid": "36ed23ae-0c88-5a94-bcb9-b7bf12663d12", 00:15:42.061 "is_configured": true, 00:15:42.061 "data_offset": 0, 00:15:42.061 "data_size": 65536 00:15:42.061 } 00:15:42.061 ] 00:15:42.061 }' 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.061 15:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.045 "name": "raid_bdev1", 00:15:43.045 "uuid": "ea1bf377-6261-44b0-9bf0-12e66102d33b", 00:15:43.045 "strip_size_kb": 64, 00:15:43.045 "state": "online", 00:15:43.045 "raid_level": "raid5f", 00:15:43.045 "superblock": false, 00:15:43.045 "num_base_bdevs": 3, 00:15:43.045 "num_base_bdevs_discovered": 3, 00:15:43.045 
"num_base_bdevs_operational": 3, 00:15:43.045 "process": { 00:15:43.045 "type": "rebuild", 00:15:43.045 "target": "spare", 00:15:43.045 "progress": { 00:15:43.045 "blocks": 114688, 00:15:43.045 "percent": 87 00:15:43.045 } 00:15:43.045 }, 00:15:43.045 "base_bdevs_list": [ 00:15:43.045 { 00:15:43.045 "name": "spare", 00:15:43.045 "uuid": "96910e7a-5f8e-5471-b7b0-3f347ed7dad7", 00:15:43.045 "is_configured": true, 00:15:43.045 "data_offset": 0, 00:15:43.045 "data_size": 65536 00:15:43.045 }, 00:15:43.045 { 00:15:43.045 "name": "BaseBdev2", 00:15:43.045 "uuid": "f10bf066-3d69-5280-b56d-56eff36735f7", 00:15:43.045 "is_configured": true, 00:15:43.045 "data_offset": 0, 00:15:43.045 "data_size": 65536 00:15:43.045 }, 00:15:43.045 { 00:15:43.045 "name": "BaseBdev3", 00:15:43.045 "uuid": "36ed23ae-0c88-5a94-bcb9-b7bf12663d12", 00:15:43.045 "is_configured": true, 00:15:43.045 "data_offset": 0, 00:15:43.045 "data_size": 65536 00:15:43.045 } 00:15:43.045 ] 00:15:43.045 }' 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.045 15:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:43.615 [2024-11-25 15:42:42.187037] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:43.615 [2024-11-25 15:42:42.187156] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:43.615 [2024-11-25 15:42:42.187242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.184 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:44.184 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.184 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.184 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.184 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.184 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.184 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.184 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.184 15:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.184 15:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.184 15:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.184 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.184 "name": "raid_bdev1", 00:15:44.184 "uuid": "ea1bf377-6261-44b0-9bf0-12e66102d33b", 00:15:44.184 "strip_size_kb": 64, 00:15:44.184 "state": "online", 00:15:44.184 "raid_level": "raid5f", 00:15:44.184 "superblock": false, 00:15:44.184 "num_base_bdevs": 3, 00:15:44.184 "num_base_bdevs_discovered": 3, 00:15:44.184 "num_base_bdevs_operational": 3, 00:15:44.184 "base_bdevs_list": [ 00:15:44.184 { 00:15:44.184 "name": "spare", 00:15:44.184 "uuid": "96910e7a-5f8e-5471-b7b0-3f347ed7dad7", 00:15:44.184 "is_configured": true, 00:15:44.184 "data_offset": 0, 00:15:44.184 "data_size": 65536 00:15:44.184 }, 00:15:44.184 { 00:15:44.184 "name": "BaseBdev2", 00:15:44.184 "uuid": "f10bf066-3d69-5280-b56d-56eff36735f7", 00:15:44.184 "is_configured": true, 00:15:44.184 
"data_offset": 0, 00:15:44.184 "data_size": 65536 00:15:44.184 }, 00:15:44.184 { 00:15:44.184 "name": "BaseBdev3", 00:15:44.184 "uuid": "36ed23ae-0c88-5a94-bcb9-b7bf12663d12", 00:15:44.184 "is_configured": true, 00:15:44.184 "data_offset": 0, 00:15:44.184 "data_size": 65536 00:15:44.184 } 00:15:44.185 ] 00:15:44.185 }' 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.185 15:42:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.185 "name": "raid_bdev1", 00:15:44.185 "uuid": "ea1bf377-6261-44b0-9bf0-12e66102d33b", 00:15:44.185 "strip_size_kb": 64, 00:15:44.185 "state": "online", 00:15:44.185 "raid_level": "raid5f", 00:15:44.185 "superblock": false, 00:15:44.185 "num_base_bdevs": 3, 00:15:44.185 "num_base_bdevs_discovered": 3, 00:15:44.185 "num_base_bdevs_operational": 3, 00:15:44.185 "base_bdevs_list": [ 00:15:44.185 { 00:15:44.185 "name": "spare", 00:15:44.185 "uuid": "96910e7a-5f8e-5471-b7b0-3f347ed7dad7", 00:15:44.185 "is_configured": true, 00:15:44.185 "data_offset": 0, 00:15:44.185 "data_size": 65536 00:15:44.185 }, 00:15:44.185 { 00:15:44.185 "name": "BaseBdev2", 00:15:44.185 "uuid": "f10bf066-3d69-5280-b56d-56eff36735f7", 00:15:44.185 "is_configured": true, 00:15:44.185 "data_offset": 0, 00:15:44.185 "data_size": 65536 00:15:44.185 }, 00:15:44.185 { 00:15:44.185 "name": "BaseBdev3", 00:15:44.185 "uuid": "36ed23ae-0c88-5a94-bcb9-b7bf12663d12", 00:15:44.185 "is_configured": true, 00:15:44.185 "data_offset": 0, 00:15:44.185 "data_size": 65536 00:15:44.185 } 00:15:44.185 ] 00:15:44.185 }' 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.185 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.445 15:42:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.445 "name": "raid_bdev1", 00:15:44.445 "uuid": "ea1bf377-6261-44b0-9bf0-12e66102d33b", 00:15:44.445 "strip_size_kb": 64, 00:15:44.445 "state": "online", 00:15:44.445 "raid_level": "raid5f", 00:15:44.445 "superblock": false, 00:15:44.445 "num_base_bdevs": 3, 00:15:44.445 "num_base_bdevs_discovered": 3, 00:15:44.445 "num_base_bdevs_operational": 3, 00:15:44.445 "base_bdevs_list": [ 00:15:44.445 { 00:15:44.445 "name": "spare", 00:15:44.445 "uuid": "96910e7a-5f8e-5471-b7b0-3f347ed7dad7", 00:15:44.445 "is_configured": true, 00:15:44.445 "data_offset": 0, 00:15:44.445 "data_size": 65536 00:15:44.445 }, 00:15:44.445 { 00:15:44.445 
"name": "BaseBdev2", 00:15:44.445 "uuid": "f10bf066-3d69-5280-b56d-56eff36735f7", 00:15:44.445 "is_configured": true, 00:15:44.445 "data_offset": 0, 00:15:44.445 "data_size": 65536 00:15:44.445 }, 00:15:44.445 { 00:15:44.445 "name": "BaseBdev3", 00:15:44.445 "uuid": "36ed23ae-0c88-5a94-bcb9-b7bf12663d12", 00:15:44.445 "is_configured": true, 00:15:44.445 "data_offset": 0, 00:15:44.445 "data_size": 65536 00:15:44.445 } 00:15:44.445 ] 00:15:44.445 }' 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.445 15:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.705 [2024-11-25 15:42:43.327300] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.705 [2024-11-25 15:42:43.327328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.705 [2024-11-25 15:42:43.327408] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.705 [2024-11-25 15:42:43.327482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.705 [2024-11-25 15:42:43.327495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:44.705 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:44.966 /dev/nbd0 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.966 1+0 records in 00:15:44.966 1+0 records out 00:15:44.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281375 s, 14.6 MB/s 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:44.966 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:45.226 /dev/nbd1 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.226 1+0 records in 00:15:45.226 1+0 records out 00:15:45.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373946 s, 11.0 MB/s 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:45.226 15:42:43 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:45.226 15:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:45.486 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:45.486 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:45.486 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:45.486 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:45.486 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:45.486 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.486 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:45.746 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:45.746 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:45.746 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:45.746 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.746 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.746 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:45.746 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:45.746 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:45.746 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.746 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81198 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81198 ']' 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81198 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81198 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:46.006 killing process with pid 81198 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81198' 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81198 00:15:46.006 Received shutdown signal, test time was about 60.000000 seconds 00:15:46.006 00:15:46.006 Latency(us) 00:15:46.006 [2024-11-25T15:42:44.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.006 [2024-11-25T15:42:44.687Z] =================================================================================================================== 00:15:46.006 [2024-11-25T15:42:44.687Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:46.006 [2024-11-25 15:42:44.513908] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.006 15:42:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81198 00:15:46.266 [2024-11-25 15:42:44.893384] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.646 15:42:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:47.646 00:15:47.646 real 0m15.063s 00:15:47.646 user 0m18.434s 00:15:47.646 sys 0m2.013s 00:15:47.646 15:42:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.646 ************************************ 00:15:47.646 END TEST raid5f_rebuild_test 00:15:47.646 ************************************ 00:15:47.646 15:42:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.647 15:42:45 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:47.647 15:42:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:47.647 15:42:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.647 15:42:45 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:15:47.647 ************************************ 00:15:47.647 START TEST raid5f_rebuild_test_sb 00:15:47.647 ************************************ 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81641 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81641 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81641 ']' 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.647 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.647 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:47.647 Zero copy mechanism will not be used. 00:15:47.647 [2024-11-25 15:42:46.102678] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:15:47.647 [2024-11-25 15:42:46.102807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81641 ] 00:15:47.647 [2024-11-25 15:42:46.272518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.907 [2024-11-25 15:42:46.379619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.907 [2024-11-25 15:42:46.570188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.907 [2024-11-25 15:42:46.570220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.478 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.478 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:48.478 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:15:48.478 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:48.478 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.478 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.478 BaseBdev1_malloc 00:15:48.478 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.478 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:48.478 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.478 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.478 [2024-11-25 15:42:46.962055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:48.478 [2024-11-25 15:42:46.962119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.478 [2024-11-25 15:42:46.962143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:48.478 [2024-11-25 15:42:46.962154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.478 [2024-11-25 15:42:46.964202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.478 [2024-11-25 15:42:46.964240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:48.478 BaseBdev1 00:15:48.478 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.478 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.478 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:48.478 15:42:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.478 15:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.478 BaseBdev2_malloc 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.478 [2024-11-25 15:42:47.016639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:48.478 [2024-11-25 15:42:47.016695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.478 [2024-11-25 15:42:47.016711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:48.478 [2024-11-25 15:42:47.016723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.478 [2024-11-25 15:42:47.018674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.478 [2024-11-25 15:42:47.018711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:48.478 BaseBdev2 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:48.478 BaseBdev3_malloc 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.478 [2024-11-25 15:42:47.102892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:48.478 [2024-11-25 15:42:47.102942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.478 [2024-11-25 15:42:47.102977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:48.478 [2024-11-25 15:42:47.102987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.478 [2024-11-25 15:42:47.105001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.478 [2024-11-25 15:42:47.105046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:48.478 BaseBdev3 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.478 spare_malloc 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.478 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.738 spare_delay 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.738 [2024-11-25 15:42:47.168456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:48.738 [2024-11-25 15:42:47.168503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.738 [2024-11-25 15:42:47.168535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:48.738 [2024-11-25 15:42:47.168545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.738 [2024-11-25 15:42:47.170559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.738 [2024-11-25 15:42:47.170598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:48.738 spare 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.738 [2024-11-25 15:42:47.180498] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.738 [2024-11-25 15:42:47.182201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.738 [2024-11-25 15:42:47.182260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.738 [2024-11-25 15:42:47.182430] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:48.738 [2024-11-25 15:42:47.182444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:48.738 [2024-11-25 15:42:47.182709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:48.738 [2024-11-25 15:42:47.188224] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:48.738 [2024-11-25 15:42:47.188248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:48.738 [2024-11-25 15:42:47.188427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.738 "name": "raid_bdev1", 00:15:48.738 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:15:48.738 "strip_size_kb": 64, 00:15:48.738 "state": "online", 00:15:48.738 "raid_level": "raid5f", 00:15:48.738 "superblock": true, 00:15:48.738 "num_base_bdevs": 3, 00:15:48.738 "num_base_bdevs_discovered": 3, 00:15:48.738 "num_base_bdevs_operational": 3, 00:15:48.738 "base_bdevs_list": [ 00:15:48.738 { 00:15:48.738 "name": "BaseBdev1", 00:15:48.738 "uuid": "788ba544-2d80-5792-aa8a-a353b84abc85", 00:15:48.738 "is_configured": true, 00:15:48.738 "data_offset": 2048, 00:15:48.738 "data_size": 63488 00:15:48.738 }, 00:15:48.738 { 00:15:48.738 "name": "BaseBdev2", 00:15:48.738 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:15:48.738 "is_configured": true, 00:15:48.738 "data_offset": 2048, 00:15:48.738 "data_size": 63488 00:15:48.738 }, 00:15:48.738 { 00:15:48.738 "name": "BaseBdev3", 00:15:48.738 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:15:48.738 "is_configured": true, 
00:15:48.738 "data_offset": 2048, 00:15:48.738 "data_size": 63488 00:15:48.738 } 00:15:48.738 ] 00:15:48.738 }' 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.738 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:48.998 [2024-11-25 15:42:47.586267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:48.998 15:42:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:48.998 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:49.258 [2024-11-25 15:42:47.817745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:49.258 /dev/nbd0 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:49.258 1+0 records in 00:15:49.258 1+0 records out 00:15:49.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034606 s, 11.8 MB/s 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:49.258 15:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:49.517 496+0 records in 00:15:49.517 496+0 records out 00:15:49.517 65011712 bytes (65 MB, 62 MiB) copied, 0.301164 s, 216 MB/s 00:15:49.517 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:49.517 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:49.517 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:49.517 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:49.517 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:49.517 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.517 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:49.775 [2024-11-25 15:42:48.404099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.775 [2024-11-25 15:42:48.415719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.775 15:42:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.775 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.034 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.034 "name": "raid_bdev1", 00:15:50.034 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:15:50.034 "strip_size_kb": 64, 00:15:50.034 "state": "online", 00:15:50.034 "raid_level": "raid5f", 00:15:50.034 "superblock": true, 00:15:50.034 "num_base_bdevs": 3, 00:15:50.034 "num_base_bdevs_discovered": 2, 00:15:50.034 "num_base_bdevs_operational": 2, 00:15:50.034 "base_bdevs_list": [ 00:15:50.034 { 00:15:50.034 "name": null, 00:15:50.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.034 "is_configured": false, 00:15:50.034 "data_offset": 0, 00:15:50.034 "data_size": 63488 00:15:50.034 }, 00:15:50.034 { 00:15:50.034 "name": "BaseBdev2", 00:15:50.034 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:15:50.034 "is_configured": true, 00:15:50.034 "data_offset": 2048, 00:15:50.034 "data_size": 63488 00:15:50.034 }, 00:15:50.034 { 00:15:50.034 "name": "BaseBdev3", 00:15:50.034 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:15:50.034 "is_configured": true, 00:15:50.034 "data_offset": 2048, 00:15:50.034 "data_size": 63488 00:15:50.034 } 00:15:50.034 ] 00:15:50.034 }' 00:15:50.034 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.034 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.293 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:50.293 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.293 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.293 [2024-11-25 15:42:48.843035] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:50.293 [2024-11-25 15:42:48.859844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:50.293 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.293 15:42:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:50.293 [2024-11-25 15:42:48.867087] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:51.231 15:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.231 15:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.231 15:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.231 15:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.231 15:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.231 15:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.231 15:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.231 15:42:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.231 15:42:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.231 15:42:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.489 15:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.489 "name": "raid_bdev1", 00:15:51.489 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:15:51.489 "strip_size_kb": 64, 00:15:51.489 "state": "online", 00:15:51.489 "raid_level": "raid5f", 00:15:51.489 
"superblock": true, 00:15:51.489 "num_base_bdevs": 3, 00:15:51.489 "num_base_bdevs_discovered": 3, 00:15:51.489 "num_base_bdevs_operational": 3, 00:15:51.489 "process": { 00:15:51.489 "type": "rebuild", 00:15:51.489 "target": "spare", 00:15:51.489 "progress": { 00:15:51.489 "blocks": 20480, 00:15:51.489 "percent": 16 00:15:51.489 } 00:15:51.489 }, 00:15:51.489 "base_bdevs_list": [ 00:15:51.489 { 00:15:51.489 "name": "spare", 00:15:51.489 "uuid": "7730a534-3ac4-5811-aad6-0bceded35884", 00:15:51.489 "is_configured": true, 00:15:51.489 "data_offset": 2048, 00:15:51.489 "data_size": 63488 00:15:51.489 }, 00:15:51.489 { 00:15:51.489 "name": "BaseBdev2", 00:15:51.489 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:15:51.489 "is_configured": true, 00:15:51.489 "data_offset": 2048, 00:15:51.489 "data_size": 63488 00:15:51.489 }, 00:15:51.489 { 00:15:51.490 "name": "BaseBdev3", 00:15:51.490 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:15:51.490 "is_configured": true, 00:15:51.490 "data_offset": 2048, 00:15:51.490 "data_size": 63488 00:15:51.490 } 00:15:51.490 ] 00:15:51.490 }' 00:15:51.490 15:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.490 15:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.490 15:42:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.490 [2024-11-25 15:42:50.014293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:15:51.490 [2024-11-25 15:42:50.074399] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:51.490 [2024-11-25 15:42:50.074453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.490 [2024-11-25 15:42:50.074486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.490 [2024-11-25 15:42:50.074494] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.490 "name": "raid_bdev1", 00:15:51.490 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:15:51.490 "strip_size_kb": 64, 00:15:51.490 "state": "online", 00:15:51.490 "raid_level": "raid5f", 00:15:51.490 "superblock": true, 00:15:51.490 "num_base_bdevs": 3, 00:15:51.490 "num_base_bdevs_discovered": 2, 00:15:51.490 "num_base_bdevs_operational": 2, 00:15:51.490 "base_bdevs_list": [ 00:15:51.490 { 00:15:51.490 "name": null, 00:15:51.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.490 "is_configured": false, 00:15:51.490 "data_offset": 0, 00:15:51.490 "data_size": 63488 00:15:51.490 }, 00:15:51.490 { 00:15:51.490 "name": "BaseBdev2", 00:15:51.490 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:15:51.490 "is_configured": true, 00:15:51.490 "data_offset": 2048, 00:15:51.490 "data_size": 63488 00:15:51.490 }, 00:15:51.490 { 00:15:51.490 "name": "BaseBdev3", 00:15:51.490 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:15:51.490 "is_configured": true, 00:15:51.490 "data_offset": 2048, 00:15:51.490 "data_size": 63488 00:15:51.490 } 00:15:51.490 ] 00:15:51.490 }' 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.490 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.059 15:42:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.059 "name": "raid_bdev1", 00:15:52.059 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:15:52.059 "strip_size_kb": 64, 00:15:52.059 "state": "online", 00:15:52.059 "raid_level": "raid5f", 00:15:52.059 "superblock": true, 00:15:52.059 "num_base_bdevs": 3, 00:15:52.059 "num_base_bdevs_discovered": 2, 00:15:52.059 "num_base_bdevs_operational": 2, 00:15:52.059 "base_bdevs_list": [ 00:15:52.059 { 00:15:52.059 "name": null, 00:15:52.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.059 "is_configured": false, 00:15:52.059 "data_offset": 0, 00:15:52.059 "data_size": 63488 00:15:52.059 }, 00:15:52.059 { 00:15:52.059 "name": "BaseBdev2", 00:15:52.059 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:15:52.059 "is_configured": true, 00:15:52.059 "data_offset": 2048, 00:15:52.059 "data_size": 63488 00:15:52.059 }, 00:15:52.059 { 00:15:52.059 "name": "BaseBdev3", 00:15:52.059 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:15:52.059 "is_configured": true, 00:15:52.059 "data_offset": 2048, 00:15:52.059 
"data_size": 63488 00:15:52.059 } 00:15:52.059 ] 00:15:52.059 }' 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.059 [2024-11-25 15:42:50.711043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:52.059 [2024-11-25 15:42:50.726945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.059 15:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:52.059 [2024-11-25 15:42:50.734346] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:53.437 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.437 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.437 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.437 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.437 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:15:53.437 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.437 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.438 "name": "raid_bdev1", 00:15:53.438 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:15:53.438 "strip_size_kb": 64, 00:15:53.438 "state": "online", 00:15:53.438 "raid_level": "raid5f", 00:15:53.438 "superblock": true, 00:15:53.438 "num_base_bdevs": 3, 00:15:53.438 "num_base_bdevs_discovered": 3, 00:15:53.438 "num_base_bdevs_operational": 3, 00:15:53.438 "process": { 00:15:53.438 "type": "rebuild", 00:15:53.438 "target": "spare", 00:15:53.438 "progress": { 00:15:53.438 "blocks": 20480, 00:15:53.438 "percent": 16 00:15:53.438 } 00:15:53.438 }, 00:15:53.438 "base_bdevs_list": [ 00:15:53.438 { 00:15:53.438 "name": "spare", 00:15:53.438 "uuid": "7730a534-3ac4-5811-aad6-0bceded35884", 00:15:53.438 "is_configured": true, 00:15:53.438 "data_offset": 2048, 00:15:53.438 "data_size": 63488 00:15:53.438 }, 00:15:53.438 { 00:15:53.438 "name": "BaseBdev2", 00:15:53.438 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:15:53.438 "is_configured": true, 00:15:53.438 "data_offset": 2048, 00:15:53.438 "data_size": 63488 00:15:53.438 }, 00:15:53.438 { 00:15:53.438 "name": "BaseBdev3", 00:15:53.438 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:15:53.438 "is_configured": true, 00:15:53.438 "data_offset": 2048, 00:15:53.438 "data_size": 63488 00:15:53.438 } 00:15:53.438 ] 00:15:53.438 }' 
00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:53.438 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=544 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.438 "name": "raid_bdev1", 00:15:53.438 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:15:53.438 "strip_size_kb": 64, 00:15:53.438 "state": "online", 00:15:53.438 "raid_level": "raid5f", 00:15:53.438 "superblock": true, 00:15:53.438 "num_base_bdevs": 3, 00:15:53.438 "num_base_bdevs_discovered": 3, 00:15:53.438 "num_base_bdevs_operational": 3, 00:15:53.438 "process": { 00:15:53.438 "type": "rebuild", 00:15:53.438 "target": "spare", 00:15:53.438 "progress": { 00:15:53.438 "blocks": 22528, 00:15:53.438 "percent": 17 00:15:53.438 } 00:15:53.438 }, 00:15:53.438 "base_bdevs_list": [ 00:15:53.438 { 00:15:53.438 "name": "spare", 00:15:53.438 "uuid": "7730a534-3ac4-5811-aad6-0bceded35884", 00:15:53.438 "is_configured": true, 00:15:53.438 "data_offset": 2048, 00:15:53.438 "data_size": 63488 00:15:53.438 }, 00:15:53.438 { 00:15:53.438 "name": "BaseBdev2", 00:15:53.438 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:15:53.438 "is_configured": true, 00:15:53.438 "data_offset": 2048, 00:15:53.438 "data_size": 63488 00:15:53.438 }, 00:15:53.438 { 00:15:53.438 "name": "BaseBdev3", 00:15:53.438 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:15:53.438 "is_configured": true, 00:15:53.438 "data_offset": 2048, 00:15:53.438 "data_size": 63488 00:15:53.438 } 00:15:53.438 ] 00:15:53.438 }' 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.438 15:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:54.376 15:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:54.376 15:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.376 15:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.376 15:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.376 15:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.376 15:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.376 15:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.376 15:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.377 15:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.377 15:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.377 15:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.377 15:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.377 "name": "raid_bdev1", 00:15:54.377 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:15:54.377 "strip_size_kb": 64, 00:15:54.377 "state": "online", 00:15:54.377 "raid_level": "raid5f", 00:15:54.377 "superblock": true, 00:15:54.377 "num_base_bdevs": 3, 00:15:54.377 "num_base_bdevs_discovered": 3, 00:15:54.377 
"num_base_bdevs_operational": 3, 00:15:54.377 "process": { 00:15:54.377 "type": "rebuild", 00:15:54.377 "target": "spare", 00:15:54.377 "progress": { 00:15:54.377 "blocks": 45056, 00:15:54.377 "percent": 35 00:15:54.377 } 00:15:54.377 }, 00:15:54.377 "base_bdevs_list": [ 00:15:54.377 { 00:15:54.377 "name": "spare", 00:15:54.377 "uuid": "7730a534-3ac4-5811-aad6-0bceded35884", 00:15:54.377 "is_configured": true, 00:15:54.377 "data_offset": 2048, 00:15:54.377 "data_size": 63488 00:15:54.377 }, 00:15:54.377 { 00:15:54.377 "name": "BaseBdev2", 00:15:54.377 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:15:54.377 "is_configured": true, 00:15:54.377 "data_offset": 2048, 00:15:54.377 "data_size": 63488 00:15:54.377 }, 00:15:54.377 { 00:15:54.377 "name": "BaseBdev3", 00:15:54.377 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:15:54.377 "is_configured": true, 00:15:54.377 "data_offset": 2048, 00:15:54.377 "data_size": 63488 00:15:54.377 } 00:15:54.377 ] 00:15:54.377 }' 00:15:54.377 15:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.377 15:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.377 15:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.637 15:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.637 15:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.577 "name": "raid_bdev1", 00:15:55.577 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:15:55.577 "strip_size_kb": 64, 00:15:55.577 "state": "online", 00:15:55.577 "raid_level": "raid5f", 00:15:55.577 "superblock": true, 00:15:55.577 "num_base_bdevs": 3, 00:15:55.577 "num_base_bdevs_discovered": 3, 00:15:55.577 "num_base_bdevs_operational": 3, 00:15:55.577 "process": { 00:15:55.577 "type": "rebuild", 00:15:55.577 "target": "spare", 00:15:55.577 "progress": { 00:15:55.577 "blocks": 67584, 00:15:55.577 "percent": 53 00:15:55.577 } 00:15:55.577 }, 00:15:55.577 "base_bdevs_list": [ 00:15:55.577 { 00:15:55.577 "name": "spare", 00:15:55.577 "uuid": "7730a534-3ac4-5811-aad6-0bceded35884", 00:15:55.577 "is_configured": true, 00:15:55.577 "data_offset": 2048, 00:15:55.577 "data_size": 63488 00:15:55.577 }, 00:15:55.577 { 00:15:55.577 "name": "BaseBdev2", 00:15:55.577 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:15:55.577 "is_configured": true, 00:15:55.577 "data_offset": 2048, 00:15:55.577 "data_size": 63488 00:15:55.577 }, 00:15:55.577 { 00:15:55.577 "name": "BaseBdev3", 
00:15:55.577 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:15:55.577 "is_configured": true, 00:15:55.577 "data_offset": 2048, 00:15:55.577 "data_size": 63488 00:15:55.577 } 00:15:55.577 ] 00:15:55.577 }' 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.577 15:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.958 "name": "raid_bdev1", 00:15:56.958 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:15:56.958 "strip_size_kb": 64, 00:15:56.958 "state": "online", 00:15:56.958 "raid_level": "raid5f", 00:15:56.958 "superblock": true, 00:15:56.958 "num_base_bdevs": 3, 00:15:56.958 "num_base_bdevs_discovered": 3, 00:15:56.958 "num_base_bdevs_operational": 3, 00:15:56.958 "process": { 00:15:56.958 "type": "rebuild", 00:15:56.958 "target": "spare", 00:15:56.958 "progress": { 00:15:56.958 "blocks": 92160, 00:15:56.958 "percent": 72 00:15:56.958 } 00:15:56.958 }, 00:15:56.958 "base_bdevs_list": [ 00:15:56.958 { 00:15:56.958 "name": "spare", 00:15:56.958 "uuid": "7730a534-3ac4-5811-aad6-0bceded35884", 00:15:56.958 "is_configured": true, 00:15:56.958 "data_offset": 2048, 00:15:56.958 "data_size": 63488 00:15:56.958 }, 00:15:56.958 { 00:15:56.958 "name": "BaseBdev2", 00:15:56.958 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:15:56.958 "is_configured": true, 00:15:56.958 "data_offset": 2048, 00:15:56.958 "data_size": 63488 00:15:56.958 }, 00:15:56.958 { 00:15:56.958 "name": "BaseBdev3", 00:15:56.958 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:15:56.958 "is_configured": true, 00:15:56.958 "data_offset": 2048, 00:15:56.958 "data_size": 63488 00:15:56.958 } 00:15:56.958 ] 00:15:56.958 }' 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.958 15:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.898 15:42:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.898 "name": "raid_bdev1", 00:15:57.898 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:15:57.898 "strip_size_kb": 64, 00:15:57.898 "state": "online", 00:15:57.898 "raid_level": "raid5f", 00:15:57.898 "superblock": true, 00:15:57.898 "num_base_bdevs": 3, 00:15:57.898 "num_base_bdevs_discovered": 3, 00:15:57.898 "num_base_bdevs_operational": 3, 00:15:57.898 "process": { 00:15:57.898 "type": "rebuild", 00:15:57.898 "target": "spare", 00:15:57.898 "progress": { 00:15:57.898 "blocks": 114688, 00:15:57.898 "percent": 90 00:15:57.898 } 00:15:57.898 }, 00:15:57.898 "base_bdevs_list": [ 00:15:57.898 { 00:15:57.898 "name": "spare", 00:15:57.898 "uuid": 
"7730a534-3ac4-5811-aad6-0bceded35884", 00:15:57.898 "is_configured": true, 00:15:57.898 "data_offset": 2048, 00:15:57.898 "data_size": 63488 00:15:57.898 }, 00:15:57.898 { 00:15:57.898 "name": "BaseBdev2", 00:15:57.898 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:15:57.898 "is_configured": true, 00:15:57.898 "data_offset": 2048, 00:15:57.898 "data_size": 63488 00:15:57.898 }, 00:15:57.898 { 00:15:57.898 "name": "BaseBdev3", 00:15:57.898 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:15:57.898 "is_configured": true, 00:15:57.898 "data_offset": 2048, 00:15:57.898 "data_size": 63488 00:15:57.898 } 00:15:57.898 ] 00:15:57.898 }' 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.898 15:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:58.468 [2024-11-25 15:42:56.968835] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:58.468 [2024-11-25 15:42:56.968913] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:58.468 [2024-11-25 15:42:56.969004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.038 "name": "raid_bdev1", 00:15:59.038 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:15:59.038 "strip_size_kb": 64, 00:15:59.038 "state": "online", 00:15:59.038 "raid_level": "raid5f", 00:15:59.038 "superblock": true, 00:15:59.038 "num_base_bdevs": 3, 00:15:59.038 "num_base_bdevs_discovered": 3, 00:15:59.038 "num_base_bdevs_operational": 3, 00:15:59.038 "base_bdevs_list": [ 00:15:59.038 { 00:15:59.038 "name": "spare", 00:15:59.038 "uuid": "7730a534-3ac4-5811-aad6-0bceded35884", 00:15:59.038 "is_configured": true, 00:15:59.038 "data_offset": 2048, 00:15:59.038 "data_size": 63488 00:15:59.038 }, 00:15:59.038 { 00:15:59.038 "name": "BaseBdev2", 00:15:59.038 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:15:59.038 "is_configured": true, 00:15:59.038 "data_offset": 2048, 00:15:59.038 "data_size": 63488 00:15:59.038 }, 00:15:59.038 { 00:15:59.038 "name": "BaseBdev3", 00:15:59.038 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:15:59.038 "is_configured": true, 00:15:59.038 "data_offset": 2048, 00:15:59.038 "data_size": 63488 00:15:59.038 } 
00:15:59.038 ] 00:15:59.038 }' 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.038 "name": "raid_bdev1", 00:15:59.038 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:15:59.038 "strip_size_kb": 64, 00:15:59.038 "state": "online", 00:15:59.038 "raid_level": 
"raid5f", 00:15:59.038 "superblock": true, 00:15:59.038 "num_base_bdevs": 3, 00:15:59.038 "num_base_bdevs_discovered": 3, 00:15:59.038 "num_base_bdevs_operational": 3, 00:15:59.038 "base_bdevs_list": [ 00:15:59.038 { 00:15:59.038 "name": "spare", 00:15:59.038 "uuid": "7730a534-3ac4-5811-aad6-0bceded35884", 00:15:59.038 "is_configured": true, 00:15:59.038 "data_offset": 2048, 00:15:59.038 "data_size": 63488 00:15:59.038 }, 00:15:59.038 { 00:15:59.038 "name": "BaseBdev2", 00:15:59.038 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:15:59.038 "is_configured": true, 00:15:59.038 "data_offset": 2048, 00:15:59.038 "data_size": 63488 00:15:59.038 }, 00:15:59.038 { 00:15:59.038 "name": "BaseBdev3", 00:15:59.038 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:15:59.038 "is_configured": true, 00:15:59.038 "data_offset": 2048, 00:15:59.038 "data_size": 63488 00:15:59.038 } 00:15:59.038 ] 00:15:59.038 }' 00:15:59.038 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.298 15:42:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.298 "name": "raid_bdev1", 00:15:59.298 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:15:59.298 "strip_size_kb": 64, 00:15:59.298 "state": "online", 00:15:59.298 "raid_level": "raid5f", 00:15:59.298 "superblock": true, 00:15:59.298 "num_base_bdevs": 3, 00:15:59.298 "num_base_bdevs_discovered": 3, 00:15:59.298 "num_base_bdevs_operational": 3, 00:15:59.298 "base_bdevs_list": [ 00:15:59.298 { 00:15:59.298 "name": "spare", 00:15:59.298 "uuid": "7730a534-3ac4-5811-aad6-0bceded35884", 00:15:59.298 "is_configured": true, 00:15:59.298 "data_offset": 2048, 00:15:59.298 "data_size": 63488 00:15:59.298 }, 00:15:59.298 { 00:15:59.298 "name": "BaseBdev2", 00:15:59.298 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:15:59.298 "is_configured": true, 00:15:59.298 "data_offset": 2048, 00:15:59.298 
"data_size": 63488 00:15:59.298 }, 00:15:59.298 { 00:15:59.298 "name": "BaseBdev3", 00:15:59.298 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:15:59.298 "is_configured": true, 00:15:59.298 "data_offset": 2048, 00:15:59.298 "data_size": 63488 00:15:59.298 } 00:15:59.298 ] 00:15:59.298 }' 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.298 15:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.869 [2024-11-25 15:42:58.261552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.869 [2024-11-25 15:42:58.261585] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.869 [2024-11-25 15:42:58.261670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.869 [2024-11-25 15:42:58.261752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.869 [2024-11-25 15:42:58.261771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@720 -- # jq length 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:59.869 /dev/nbd0 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:59.869 
15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.869 1+0 records in 00:15:59.869 1+0 records out 00:15:59.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379126 s, 10.8 MB/s 00:15:59.869 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:00.130 /dev/nbd1 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.130 1+0 records in 00:16:00.130 1+0 records out 00:16:00.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270636 s, 15.1 MB/s 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.130 15:42:58 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:00.130 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:00.390 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:00.390 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.390 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:00.390 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.390 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:00.390 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.390 15:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:00.652 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:00.652 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:00.652 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:00.652 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.652 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.652 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:00.652 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:00.652 15:42:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.652 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.652 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.911 15:42:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.911 [2024-11-25 15:42:59.386431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:00.911 [2024-11-25 15:42:59.386490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.911 [2024-11-25 15:42:59.386508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:00.911 [2024-11-25 15:42:59.386518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.911 [2024-11-25 15:42:59.388751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.911 [2024-11-25 15:42:59.388794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:00.911 [2024-11-25 15:42:59.388876] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:00.911 [2024-11-25 15:42:59.388945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.911 [2024-11-25 15:42:59.389116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:00.911 [2024-11-25 15:42:59.389225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:00.911 spare 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.911 [2024-11-25 15:42:59.489121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:00.911 [2024-11-25 15:42:59.489151] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, 
blocklen 512 00:16:00.911 [2024-11-25 15:42:59.489440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:00.911 [2024-11-25 15:42:59.494596] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:00.911 [2024-11-25 15:42:59.494619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:00.911 [2024-11-25 15:42:59.494789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.911 "name": "raid_bdev1", 00:16:00.911 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:16:00.911 "strip_size_kb": 64, 00:16:00.911 "state": "online", 00:16:00.911 "raid_level": "raid5f", 00:16:00.911 "superblock": true, 00:16:00.911 "num_base_bdevs": 3, 00:16:00.911 "num_base_bdevs_discovered": 3, 00:16:00.911 "num_base_bdevs_operational": 3, 00:16:00.911 "base_bdevs_list": [ 00:16:00.911 { 00:16:00.911 "name": "spare", 00:16:00.911 "uuid": "7730a534-3ac4-5811-aad6-0bceded35884", 00:16:00.911 "is_configured": true, 00:16:00.911 "data_offset": 2048, 00:16:00.911 "data_size": 63488 00:16:00.911 }, 00:16:00.911 { 00:16:00.911 "name": "BaseBdev2", 00:16:00.911 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:16:00.911 "is_configured": true, 00:16:00.911 "data_offset": 2048, 00:16:00.911 "data_size": 63488 00:16:00.911 }, 00:16:00.911 { 00:16:00.911 "name": "BaseBdev3", 00:16:00.911 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:16:00.911 "is_configured": true, 00:16:00.911 "data_offset": 2048, 00:16:00.911 "data_size": 63488 00:16:00.911 } 00:16:00.911 ] 00:16:00.911 }' 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.911 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.480 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:01.480 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.480 
15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:01.480 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:01.480 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.480 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.480 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.480 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.480 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.480 15:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.481 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.481 "name": "raid_bdev1", 00:16:01.481 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:16:01.481 "strip_size_kb": 64, 00:16:01.481 "state": "online", 00:16:01.481 "raid_level": "raid5f", 00:16:01.481 "superblock": true, 00:16:01.481 "num_base_bdevs": 3, 00:16:01.481 "num_base_bdevs_discovered": 3, 00:16:01.481 "num_base_bdevs_operational": 3, 00:16:01.481 "base_bdevs_list": [ 00:16:01.481 { 00:16:01.481 "name": "spare", 00:16:01.481 "uuid": "7730a534-3ac4-5811-aad6-0bceded35884", 00:16:01.481 "is_configured": true, 00:16:01.481 "data_offset": 2048, 00:16:01.481 "data_size": 63488 00:16:01.481 }, 00:16:01.481 { 00:16:01.481 "name": "BaseBdev2", 00:16:01.481 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:16:01.481 "is_configured": true, 00:16:01.481 "data_offset": 2048, 00:16:01.481 "data_size": 63488 00:16:01.481 }, 00:16:01.481 { 00:16:01.481 "name": "BaseBdev3", 00:16:01.481 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:16:01.481 "is_configured": true, 00:16:01.481 "data_offset": 2048, 
00:16:01.481 "data_size": 63488 00:16:01.481 } 00:16:01.481 ] 00:16:01.481 }' 00:16:01.481 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.481 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:01.481 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.481 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:01.481 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.481 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.481 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.481 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:01.481 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.481 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.481 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:01.481 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.481 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.481 [2024-11-25 15:43:00.155862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.741 "name": "raid_bdev1", 00:16:01.741 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:16:01.741 "strip_size_kb": 64, 00:16:01.741 "state": "online", 00:16:01.741 "raid_level": "raid5f", 00:16:01.741 "superblock": true, 00:16:01.741 "num_base_bdevs": 3, 00:16:01.741 "num_base_bdevs_discovered": 2, 00:16:01.741 "num_base_bdevs_operational": 2, 00:16:01.741 "base_bdevs_list": [ 00:16:01.741 { 00:16:01.741 "name": null, 00:16:01.741 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:01.741 "is_configured": false, 00:16:01.741 "data_offset": 0, 00:16:01.741 "data_size": 63488 00:16:01.741 }, 00:16:01.741 { 00:16:01.741 "name": "BaseBdev2", 00:16:01.741 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:16:01.741 "is_configured": true, 00:16:01.741 "data_offset": 2048, 00:16:01.741 "data_size": 63488 00:16:01.741 }, 00:16:01.741 { 00:16:01.741 "name": "BaseBdev3", 00:16:01.741 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:16:01.741 "is_configured": true, 00:16:01.741 "data_offset": 2048, 00:16:01.741 "data_size": 63488 00:16:01.741 } 00:16:01.741 ] 00:16:01.741 }' 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.741 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.001 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:02.001 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.001 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.001 [2024-11-25 15:43:00.623120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:02.001 [2024-11-25 15:43:00.623306] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:02.001 [2024-11-25 15:43:00.623331] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:02.001 [2024-11-25 15:43:00.623364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:02.001 [2024-11-25 15:43:00.639179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:02.001 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.001 15:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:02.001 [2024-11-25 15:43:00.646249] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:03.388 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.388 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.388 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.388 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.389 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.389 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.389 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.389 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.389 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.389 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.389 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.389 "name": "raid_bdev1", 00:16:03.389 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:16:03.389 "strip_size_kb": 64, 00:16:03.389 "state": "online", 00:16:03.389 
"raid_level": "raid5f", 00:16:03.389 "superblock": true, 00:16:03.389 "num_base_bdevs": 3, 00:16:03.389 "num_base_bdevs_discovered": 3, 00:16:03.389 "num_base_bdevs_operational": 3, 00:16:03.389 "process": { 00:16:03.389 "type": "rebuild", 00:16:03.389 "target": "spare", 00:16:03.389 "progress": { 00:16:03.389 "blocks": 20480, 00:16:03.389 "percent": 16 00:16:03.389 } 00:16:03.389 }, 00:16:03.389 "base_bdevs_list": [ 00:16:03.389 { 00:16:03.389 "name": "spare", 00:16:03.389 "uuid": "7730a534-3ac4-5811-aad6-0bceded35884", 00:16:03.389 "is_configured": true, 00:16:03.390 "data_offset": 2048, 00:16:03.390 "data_size": 63488 00:16:03.390 }, 00:16:03.390 { 00:16:03.390 "name": "BaseBdev2", 00:16:03.390 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:16:03.390 "is_configured": true, 00:16:03.390 "data_offset": 2048, 00:16:03.390 "data_size": 63488 00:16:03.390 }, 00:16:03.390 { 00:16:03.390 "name": "BaseBdev3", 00:16:03.390 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:16:03.390 "is_configured": true, 00:16:03.390 "data_offset": 2048, 00:16:03.390 "data_size": 63488 00:16:03.390 } 00:16:03.390 ] 00:16:03.390 }' 00:16:03.390 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.390 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.390 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.390 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.390 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:03.390 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.390 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.390 [2024-11-25 15:43:01.789567] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.390 [2024-11-25 15:43:01.853616] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:03.390 [2024-11-25 15:43:01.853673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.390 [2024-11-25 15:43:01.853703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.391 [2024-11-25 15:43:01.853712] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:03.391 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.391 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:03.391 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.391 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.391 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.392 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.392 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:03.392 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.392 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.392 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.392 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.392 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.392 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.392 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.392 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.392 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.393 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.393 "name": "raid_bdev1", 00:16:03.393 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:16:03.393 "strip_size_kb": 64, 00:16:03.393 "state": "online", 00:16:03.393 "raid_level": "raid5f", 00:16:03.393 "superblock": true, 00:16:03.393 "num_base_bdevs": 3, 00:16:03.393 "num_base_bdevs_discovered": 2, 00:16:03.393 "num_base_bdevs_operational": 2, 00:16:03.393 "base_bdevs_list": [ 00:16:03.393 { 00:16:03.393 "name": null, 00:16:03.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.394 "is_configured": false, 00:16:03.394 "data_offset": 0, 00:16:03.394 "data_size": 63488 00:16:03.394 }, 00:16:03.394 { 00:16:03.394 "name": "BaseBdev2", 00:16:03.394 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:16:03.394 "is_configured": true, 00:16:03.394 "data_offset": 2048, 00:16:03.394 "data_size": 63488 00:16:03.394 }, 00:16:03.394 { 00:16:03.394 "name": "BaseBdev3", 00:16:03.394 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:16:03.394 "is_configured": true, 00:16:03.394 "data_offset": 2048, 00:16:03.394 "data_size": 63488 00:16:03.394 } 00:16:03.394 ] 00:16:03.394 }' 00:16:03.394 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.394 15:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.981 15:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:03.981 15:43:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.981 15:43:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.981 [2024-11-25 15:43:02.358229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:03.981 [2024-11-25 15:43:02.358292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.981 [2024-11-25 15:43:02.358312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:03.981 [2024-11-25 15:43:02.358324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.981 [2024-11-25 15:43:02.358776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.981 [2024-11-25 15:43:02.358805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:03.981 [2024-11-25 15:43:02.358896] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:03.981 [2024-11-25 15:43:02.358918] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:03.981 [2024-11-25 15:43:02.358927] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:03.981 [2024-11-25 15:43:02.358950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.981 [2024-11-25 15:43:02.374152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:03.981 spare 00:16:03.981 15:43:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.981 15:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:03.981 [2024-11-25 15:43:02.381430] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.932 "name": "raid_bdev1", 00:16:04.932 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:16:04.932 "strip_size_kb": 64, 00:16:04.932 "state": 
"online", 00:16:04.932 "raid_level": "raid5f", 00:16:04.932 "superblock": true, 00:16:04.932 "num_base_bdevs": 3, 00:16:04.932 "num_base_bdevs_discovered": 3, 00:16:04.932 "num_base_bdevs_operational": 3, 00:16:04.932 "process": { 00:16:04.932 "type": "rebuild", 00:16:04.932 "target": "spare", 00:16:04.932 "progress": { 00:16:04.932 "blocks": 20480, 00:16:04.932 "percent": 16 00:16:04.932 } 00:16:04.932 }, 00:16:04.932 "base_bdevs_list": [ 00:16:04.932 { 00:16:04.932 "name": "spare", 00:16:04.932 "uuid": "7730a534-3ac4-5811-aad6-0bceded35884", 00:16:04.932 "is_configured": true, 00:16:04.932 "data_offset": 2048, 00:16:04.932 "data_size": 63488 00:16:04.932 }, 00:16:04.932 { 00:16:04.932 "name": "BaseBdev2", 00:16:04.932 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:16:04.932 "is_configured": true, 00:16:04.932 "data_offset": 2048, 00:16:04.932 "data_size": 63488 00:16:04.932 }, 00:16:04.932 { 00:16:04.932 "name": "BaseBdev3", 00:16:04.932 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:16:04.932 "is_configured": true, 00:16:04.932 "data_offset": 2048, 00:16:04.932 "data_size": 63488 00:16:04.932 } 00:16:04.932 ] 00:16:04.932 }' 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.932 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.932 [2024-11-25 15:43:03.536639] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.932 [2024-11-25 15:43:03.588710] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:04.932 [2024-11-25 15:43:03.588762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.932 [2024-11-25 15:43:03.588778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.932 [2024-11-25 15:43:03.588784] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.192 "name": "raid_bdev1", 00:16:05.192 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:16:05.192 "strip_size_kb": 64, 00:16:05.192 "state": "online", 00:16:05.192 "raid_level": "raid5f", 00:16:05.192 "superblock": true, 00:16:05.192 "num_base_bdevs": 3, 00:16:05.192 "num_base_bdevs_discovered": 2, 00:16:05.192 "num_base_bdevs_operational": 2, 00:16:05.192 "base_bdevs_list": [ 00:16:05.192 { 00:16:05.192 "name": null, 00:16:05.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.192 "is_configured": false, 00:16:05.192 "data_offset": 0, 00:16:05.192 "data_size": 63488 00:16:05.192 }, 00:16:05.192 { 00:16:05.192 "name": "BaseBdev2", 00:16:05.192 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:16:05.192 "is_configured": true, 00:16:05.192 "data_offset": 2048, 00:16:05.192 "data_size": 63488 00:16:05.192 }, 00:16:05.192 { 00:16:05.192 "name": "BaseBdev3", 00:16:05.192 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:16:05.192 "is_configured": true, 00:16:05.192 "data_offset": 2048, 00:16:05.192 "data_size": 63488 00:16:05.192 } 00:16:05.192 ] 00:16:05.192 }' 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.192 15:43:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.452 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:05.452 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:05.452 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:05.452 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:05.452 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.452 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.452 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.452 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.452 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.452 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.452 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.452 "name": "raid_bdev1", 00:16:05.452 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:16:05.452 "strip_size_kb": 64, 00:16:05.452 "state": "online", 00:16:05.452 "raid_level": "raid5f", 00:16:05.452 "superblock": true, 00:16:05.452 "num_base_bdevs": 3, 00:16:05.452 "num_base_bdevs_discovered": 2, 00:16:05.452 "num_base_bdevs_operational": 2, 00:16:05.452 "base_bdevs_list": [ 00:16:05.452 { 00:16:05.452 "name": null, 00:16:05.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.452 "is_configured": false, 00:16:05.452 "data_offset": 0, 00:16:05.452 "data_size": 63488 00:16:05.452 }, 00:16:05.452 { 00:16:05.452 "name": "BaseBdev2", 00:16:05.452 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:16:05.452 "is_configured": true, 00:16:05.452 "data_offset": 2048, 00:16:05.452 "data_size": 63488 00:16:05.452 }, 00:16:05.452 { 00:16:05.452 "name": "BaseBdev3", 00:16:05.452 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:16:05.452 "is_configured": true, 
00:16:05.452 "data_offset": 2048, 00:16:05.452 "data_size": 63488 00:16:05.452 } 00:16:05.452 ] 00:16:05.452 }' 00:16:05.452 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.711 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.711 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.711 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:05.711 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:05.711 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.711 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.711 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.711 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:05.711 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.711 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.712 [2024-11-25 15:43:04.192712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:05.712 [2024-11-25 15:43:04.192767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.712 [2024-11-25 15:43:04.192791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:05.712 [2024-11-25 15:43:04.192800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.712 [2024-11-25 15:43:04.193300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.712 [2024-11-25 
15:43:04.193339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:05.712 [2024-11-25 15:43:04.193418] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:05.712 [2024-11-25 15:43:04.193440] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:05.712 [2024-11-25 15:43:04.193462] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:05.712 [2024-11-25 15:43:04.193473] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:05.712 BaseBdev1 00:16:05.712 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.712 15:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.650 15:43:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.650 "name": "raid_bdev1", 00:16:06.650 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:16:06.650 "strip_size_kb": 64, 00:16:06.650 "state": "online", 00:16:06.650 "raid_level": "raid5f", 00:16:06.650 "superblock": true, 00:16:06.650 "num_base_bdevs": 3, 00:16:06.650 "num_base_bdevs_discovered": 2, 00:16:06.650 "num_base_bdevs_operational": 2, 00:16:06.650 "base_bdevs_list": [ 00:16:06.650 { 00:16:06.650 "name": null, 00:16:06.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.650 "is_configured": false, 00:16:06.650 "data_offset": 0, 00:16:06.650 "data_size": 63488 00:16:06.650 }, 00:16:06.650 { 00:16:06.650 "name": "BaseBdev2", 00:16:06.650 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:16:06.650 "is_configured": true, 00:16:06.650 "data_offset": 2048, 00:16:06.650 "data_size": 63488 00:16:06.650 }, 00:16:06.650 { 00:16:06.650 "name": "BaseBdev3", 00:16:06.650 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:16:06.650 "is_configured": true, 00:16:06.650 "data_offset": 2048, 00:16:06.650 "data_size": 63488 00:16:06.650 } 00:16:06.650 ] 00:16:06.650 }' 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.650 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.910 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.910 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.910 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.910 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.910 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.170 "name": "raid_bdev1", 00:16:07.170 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:16:07.170 "strip_size_kb": 64, 00:16:07.170 "state": "online", 00:16:07.170 "raid_level": "raid5f", 00:16:07.170 "superblock": true, 00:16:07.170 "num_base_bdevs": 3, 00:16:07.170 "num_base_bdevs_discovered": 2, 00:16:07.170 "num_base_bdevs_operational": 2, 00:16:07.170 "base_bdevs_list": [ 00:16:07.170 { 00:16:07.170 "name": null, 00:16:07.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.170 "is_configured": false, 00:16:07.170 "data_offset": 0, 00:16:07.170 "data_size": 63488 00:16:07.170 }, 00:16:07.170 { 00:16:07.170 "name": "BaseBdev2", 00:16:07.170 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 
00:16:07.170 "is_configured": true, 00:16:07.170 "data_offset": 2048, 00:16:07.170 "data_size": 63488 00:16:07.170 }, 00:16:07.170 { 00:16:07.170 "name": "BaseBdev3", 00:16:07.170 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:16:07.170 "is_configured": true, 00:16:07.170 "data_offset": 2048, 00:16:07.170 "data_size": 63488 00:16:07.170 } 00:16:07.170 ] 00:16:07.170 }' 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.170 15:43:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.170 [2024-11-25 15:43:05.702155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:07.170 [2024-11-25 15:43:05.702312] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:07.170 [2024-11-25 15:43:05.702334] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:07.170 request: 00:16:07.170 { 00:16:07.170 "base_bdev": "BaseBdev1", 00:16:07.170 "raid_bdev": "raid_bdev1", 00:16:07.170 "method": "bdev_raid_add_base_bdev", 00:16:07.170 "req_id": 1 00:16:07.170 } 00:16:07.170 Got JSON-RPC error response 00:16:07.170 response: 00:16:07.170 { 00:16:07.170 "code": -22, 00:16:07.170 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:07.170 } 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:07.170 15:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.110 "name": "raid_bdev1", 00:16:08.110 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:16:08.110 "strip_size_kb": 64, 00:16:08.110 "state": "online", 00:16:08.110 "raid_level": "raid5f", 00:16:08.110 "superblock": true, 00:16:08.110 "num_base_bdevs": 3, 00:16:08.110 "num_base_bdevs_discovered": 2, 00:16:08.110 "num_base_bdevs_operational": 2, 00:16:08.110 "base_bdevs_list": [ 00:16:08.110 { 00:16:08.110 "name": null, 00:16:08.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.110 "is_configured": false, 00:16:08.110 "data_offset": 0, 00:16:08.110 "data_size": 63488 00:16:08.110 }, 00:16:08.110 { 00:16:08.110 
"name": "BaseBdev2", 00:16:08.110 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:16:08.110 "is_configured": true, 00:16:08.110 "data_offset": 2048, 00:16:08.110 "data_size": 63488 00:16:08.110 }, 00:16:08.110 { 00:16:08.110 "name": "BaseBdev3", 00:16:08.110 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:16:08.110 "is_configured": true, 00:16:08.110 "data_offset": 2048, 00:16:08.110 "data_size": 63488 00:16:08.110 } 00:16:08.110 ] 00:16:08.110 }' 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.110 15:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.680 "name": "raid_bdev1", 00:16:08.680 "uuid": "7c0292ad-8022-4c9b-b4d1-da735e8b4c10", 00:16:08.680 
"strip_size_kb": 64, 00:16:08.680 "state": "online", 00:16:08.680 "raid_level": "raid5f", 00:16:08.680 "superblock": true, 00:16:08.680 "num_base_bdevs": 3, 00:16:08.680 "num_base_bdevs_discovered": 2, 00:16:08.680 "num_base_bdevs_operational": 2, 00:16:08.680 "base_bdevs_list": [ 00:16:08.680 { 00:16:08.680 "name": null, 00:16:08.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.680 "is_configured": false, 00:16:08.680 "data_offset": 0, 00:16:08.680 "data_size": 63488 00:16:08.680 }, 00:16:08.680 { 00:16:08.680 "name": "BaseBdev2", 00:16:08.680 "uuid": "a62abecd-a12d-5ec7-994a-cb0a00b61c4d", 00:16:08.680 "is_configured": true, 00:16:08.680 "data_offset": 2048, 00:16:08.680 "data_size": 63488 00:16:08.680 }, 00:16:08.680 { 00:16:08.680 "name": "BaseBdev3", 00:16:08.680 "uuid": "8786cf05-c9da-525d-8bc1-c60c2c679054", 00:16:08.680 "is_configured": true, 00:16:08.680 "data_offset": 2048, 00:16:08.680 "data_size": 63488 00:16:08.680 } 00:16:08.680 ] 00:16:08.680 }' 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81641 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81641 ']' 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81641 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.680 15:43:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81641 00:16:08.680 killing process with pid 81641 00:16:08.680 Received shutdown signal, test time was about 60.000000 seconds 00:16:08.680 00:16:08.680 Latency(us) 00:16:08.680 [2024-11-25T15:43:07.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.680 [2024-11-25T15:43:07.361Z] =================================================================================================================== 00:16:08.680 [2024-11-25T15:43:07.361Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81641' 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81641 00:16:08.680 [2024-11-25 15:43:07.333799] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:08.680 [2024-11-25 15:43:07.333917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.680 15:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81641 00:16:08.680 [2024-11-25 15:43:07.333978] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.680 [2024-11-25 15:43:07.333989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:09.249 [2024-11-25 15:43:07.698680] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:10.189 15:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:10.189 00:16:10.189 real 0m22.709s 00:16:10.189 user 0m29.144s 
00:16:10.189 sys 0m2.524s 00:16:10.189 15:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.189 15:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.189 ************************************ 00:16:10.189 END TEST raid5f_rebuild_test_sb 00:16:10.189 ************************************ 00:16:10.189 15:43:08 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:10.189 15:43:08 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:10.189 15:43:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:10.189 15:43:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.189 15:43:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:10.189 ************************************ 00:16:10.189 START TEST raid5f_state_function_test 00:16:10.189 ************************************ 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82384 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:10.189 Process raid pid: 82384 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82384' 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82384 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82384 ']' 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.189 15:43:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.449 [2024-11-25 15:43:08.883863] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:16:10.449 [2024-11-25 15:43:08.883971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.449 [2024-11-25 15:43:09.051057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.713 [2024-11-25 15:43:09.158125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.713 [2024-11-25 15:43:09.355934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.713 [2024-11-25 15:43:09.355969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.282 15:43:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.282 15:43:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:11.282 15:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:11.282 15:43:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.282 15:43:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.282 [2024-11-25 15:43:09.687738] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:11.282 [2024-11-25 15:43:09.687785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:11.282 [2024-11-25 15:43:09.687796] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:11.282 [2024-11-25 15:43:09.687805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:11.282 [2024-11-25 15:43:09.687811] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:11.282 [2024-11-25 15:43:09.687820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:11.282 [2024-11-25 15:43:09.687826] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:11.282 [2024-11-25 15:43:09.687834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:11.282 15:43:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.282 15:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:11.282 15:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.282 15:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.282 15:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.283 15:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.283 15:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.283 15:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.283 15:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.283 15:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.283 15:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.283 15:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.283 15:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.283 15:43:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.283 15:43:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.283 15:43:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.283 15:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.283 "name": "Existed_Raid", 00:16:11.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.283 "strip_size_kb": 64, 00:16:11.283 "state": "configuring", 00:16:11.283 "raid_level": "raid5f", 00:16:11.283 "superblock": false, 00:16:11.283 "num_base_bdevs": 4, 00:16:11.283 "num_base_bdevs_discovered": 0, 00:16:11.283 "num_base_bdevs_operational": 4, 00:16:11.283 "base_bdevs_list": [ 00:16:11.283 { 00:16:11.283 "name": "BaseBdev1", 00:16:11.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.283 "is_configured": false, 00:16:11.283 "data_offset": 0, 00:16:11.283 "data_size": 0 00:16:11.283 }, 00:16:11.283 { 00:16:11.283 "name": "BaseBdev2", 00:16:11.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.283 "is_configured": false, 00:16:11.283 "data_offset": 0, 00:16:11.283 "data_size": 0 00:16:11.283 }, 00:16:11.283 { 00:16:11.283 "name": "BaseBdev3", 00:16:11.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.283 "is_configured": false, 00:16:11.283 "data_offset": 0, 00:16:11.283 "data_size": 0 00:16:11.283 }, 00:16:11.283 { 00:16:11.283 "name": "BaseBdev4", 00:16:11.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.283 "is_configured": false, 00:16:11.283 "data_offset": 0, 00:16:11.283 "data_size": 0 00:16:11.283 } 00:16:11.283 ] 00:16:11.283 }' 00:16:11.283 15:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.283 15:43:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.542 15:43:10 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.543 [2024-11-25 15:43:10.087010] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:11.543 [2024-11-25 15:43:10.087059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.543 [2024-11-25 15:43:10.095019] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:11.543 [2024-11-25 15:43:10.095054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:11.543 [2024-11-25 15:43:10.095062] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:11.543 [2024-11-25 15:43:10.095071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:11.543 [2024-11-25 15:43:10.095077] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:11.543 [2024-11-25 15:43:10.095085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:11.543 [2024-11-25 15:43:10.095091] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:11.543 [2024-11-25 15:43:10.095099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.543 [2024-11-25 15:43:10.133287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.543 BaseBdev1 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.543 
15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.543 [ 00:16:11.543 { 00:16:11.543 "name": "BaseBdev1", 00:16:11.543 "aliases": [ 00:16:11.543 "f09c5e9d-f745-432f-9b41-3d1c95822deb" 00:16:11.543 ], 00:16:11.543 "product_name": "Malloc disk", 00:16:11.543 "block_size": 512, 00:16:11.543 "num_blocks": 65536, 00:16:11.543 "uuid": "f09c5e9d-f745-432f-9b41-3d1c95822deb", 00:16:11.543 "assigned_rate_limits": { 00:16:11.543 "rw_ios_per_sec": 0, 00:16:11.543 "rw_mbytes_per_sec": 0, 00:16:11.543 "r_mbytes_per_sec": 0, 00:16:11.543 "w_mbytes_per_sec": 0 00:16:11.543 }, 00:16:11.543 "claimed": true, 00:16:11.543 "claim_type": "exclusive_write", 00:16:11.543 "zoned": false, 00:16:11.543 "supported_io_types": { 00:16:11.543 "read": true, 00:16:11.543 "write": true, 00:16:11.543 "unmap": true, 00:16:11.543 "flush": true, 00:16:11.543 "reset": true, 00:16:11.543 "nvme_admin": false, 00:16:11.543 "nvme_io": false, 00:16:11.543 "nvme_io_md": false, 00:16:11.543 "write_zeroes": true, 00:16:11.543 "zcopy": true, 00:16:11.543 "get_zone_info": false, 00:16:11.543 "zone_management": false, 00:16:11.543 "zone_append": false, 00:16:11.543 "compare": false, 00:16:11.543 "compare_and_write": false, 00:16:11.543 "abort": true, 00:16:11.543 "seek_hole": false, 00:16:11.543 "seek_data": false, 00:16:11.543 "copy": true, 00:16:11.543 "nvme_iov_md": false 00:16:11.543 }, 00:16:11.543 "memory_domains": [ 00:16:11.543 { 00:16:11.543 "dma_device_id": "system", 00:16:11.543 "dma_device_type": 1 00:16:11.543 }, 00:16:11.543 { 00:16:11.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.543 "dma_device_type": 2 00:16:11.543 } 00:16:11.543 ], 00:16:11.543 "driver_specific": {} 00:16:11.543 } 
00:16:11.543 ] 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:11.543 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.543 "name": "Existed_Raid", 00:16:11.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.543 "strip_size_kb": 64, 00:16:11.543 "state": "configuring", 00:16:11.544 "raid_level": "raid5f", 00:16:11.544 "superblock": false, 00:16:11.544 "num_base_bdevs": 4, 00:16:11.544 "num_base_bdevs_discovered": 1, 00:16:11.544 "num_base_bdevs_operational": 4, 00:16:11.544 "base_bdevs_list": [ 00:16:11.544 { 00:16:11.544 "name": "BaseBdev1", 00:16:11.544 "uuid": "f09c5e9d-f745-432f-9b41-3d1c95822deb", 00:16:11.544 "is_configured": true, 00:16:11.544 "data_offset": 0, 00:16:11.544 "data_size": 65536 00:16:11.544 }, 00:16:11.544 { 00:16:11.544 "name": "BaseBdev2", 00:16:11.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.544 "is_configured": false, 00:16:11.544 "data_offset": 0, 00:16:11.544 "data_size": 0 00:16:11.544 }, 00:16:11.544 { 00:16:11.544 "name": "BaseBdev3", 00:16:11.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.544 "is_configured": false, 00:16:11.544 "data_offset": 0, 00:16:11.544 "data_size": 0 00:16:11.544 }, 00:16:11.544 { 00:16:11.544 "name": "BaseBdev4", 00:16:11.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.544 "is_configured": false, 00:16:11.544 "data_offset": 0, 00:16:11.544 "data_size": 0 00:16:11.544 } 00:16:11.544 ] 00:16:11.544 }' 00:16:11.544 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.544 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.114 
[2024-11-25 15:43:10.620480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:12.114 [2024-11-25 15:43:10.620530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.114 [2024-11-25 15:43:10.632518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:12.114 [2024-11-25 15:43:10.634281] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:12.114 [2024-11-25 15:43:10.634319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:12.114 [2024-11-25 15:43:10.634329] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:12.114 [2024-11-25 15:43:10.634338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:12.114 [2024-11-25 15:43:10.634344] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:12.114 [2024-11-25 15:43:10.634352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.114 "name": "Existed_Raid", 00:16:12.114 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:12.114 "strip_size_kb": 64, 00:16:12.114 "state": "configuring", 00:16:12.114 "raid_level": "raid5f", 00:16:12.114 "superblock": false, 00:16:12.114 "num_base_bdevs": 4, 00:16:12.114 "num_base_bdevs_discovered": 1, 00:16:12.114 "num_base_bdevs_operational": 4, 00:16:12.114 "base_bdevs_list": [ 00:16:12.114 { 00:16:12.114 "name": "BaseBdev1", 00:16:12.114 "uuid": "f09c5e9d-f745-432f-9b41-3d1c95822deb", 00:16:12.114 "is_configured": true, 00:16:12.114 "data_offset": 0, 00:16:12.114 "data_size": 65536 00:16:12.114 }, 00:16:12.114 { 00:16:12.114 "name": "BaseBdev2", 00:16:12.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.114 "is_configured": false, 00:16:12.114 "data_offset": 0, 00:16:12.114 "data_size": 0 00:16:12.114 }, 00:16:12.114 { 00:16:12.114 "name": "BaseBdev3", 00:16:12.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.114 "is_configured": false, 00:16:12.114 "data_offset": 0, 00:16:12.114 "data_size": 0 00:16:12.114 }, 00:16:12.114 { 00:16:12.114 "name": "BaseBdev4", 00:16:12.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.114 "is_configured": false, 00:16:12.114 "data_offset": 0, 00:16:12.114 "data_size": 0 00:16:12.114 } 00:16:12.114 ] 00:16:12.114 }' 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.114 15:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.683 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.684 [2024-11-25 15:43:11.100415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:12.684 BaseBdev2 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.684 [ 00:16:12.684 { 00:16:12.684 "name": "BaseBdev2", 00:16:12.684 "aliases": [ 00:16:12.684 "b6465f7f-53fe-4e09-bf91-b207338feb30" 00:16:12.684 ], 00:16:12.684 "product_name": "Malloc disk", 00:16:12.684 "block_size": 512, 00:16:12.684 "num_blocks": 65536, 00:16:12.684 "uuid": "b6465f7f-53fe-4e09-bf91-b207338feb30", 00:16:12.684 "assigned_rate_limits": { 00:16:12.684 "rw_ios_per_sec": 0, 00:16:12.684 "rw_mbytes_per_sec": 0, 00:16:12.684 
"r_mbytes_per_sec": 0, 00:16:12.684 "w_mbytes_per_sec": 0 00:16:12.684 }, 00:16:12.684 "claimed": true, 00:16:12.684 "claim_type": "exclusive_write", 00:16:12.684 "zoned": false, 00:16:12.684 "supported_io_types": { 00:16:12.684 "read": true, 00:16:12.684 "write": true, 00:16:12.684 "unmap": true, 00:16:12.684 "flush": true, 00:16:12.684 "reset": true, 00:16:12.684 "nvme_admin": false, 00:16:12.684 "nvme_io": false, 00:16:12.684 "nvme_io_md": false, 00:16:12.684 "write_zeroes": true, 00:16:12.684 "zcopy": true, 00:16:12.684 "get_zone_info": false, 00:16:12.684 "zone_management": false, 00:16:12.684 "zone_append": false, 00:16:12.684 "compare": false, 00:16:12.684 "compare_and_write": false, 00:16:12.684 "abort": true, 00:16:12.684 "seek_hole": false, 00:16:12.684 "seek_data": false, 00:16:12.684 "copy": true, 00:16:12.684 "nvme_iov_md": false 00:16:12.684 }, 00:16:12.684 "memory_domains": [ 00:16:12.684 { 00:16:12.684 "dma_device_id": "system", 00:16:12.684 "dma_device_type": 1 00:16:12.684 }, 00:16:12.684 { 00:16:12.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.684 "dma_device_type": 2 00:16:12.684 } 00:16:12.684 ], 00:16:12.684 "driver_specific": {} 00:16:12.684 } 00:16:12.684 ] 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.684 "name": "Existed_Raid", 00:16:12.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.684 "strip_size_kb": 64, 00:16:12.684 "state": "configuring", 00:16:12.684 "raid_level": "raid5f", 00:16:12.684 "superblock": false, 00:16:12.684 "num_base_bdevs": 4, 00:16:12.684 "num_base_bdevs_discovered": 2, 00:16:12.684 "num_base_bdevs_operational": 4, 00:16:12.684 "base_bdevs_list": [ 00:16:12.684 { 00:16:12.684 "name": "BaseBdev1", 00:16:12.684 "uuid": 
"f09c5e9d-f745-432f-9b41-3d1c95822deb", 00:16:12.684 "is_configured": true, 00:16:12.684 "data_offset": 0, 00:16:12.684 "data_size": 65536 00:16:12.684 }, 00:16:12.684 { 00:16:12.684 "name": "BaseBdev2", 00:16:12.684 "uuid": "b6465f7f-53fe-4e09-bf91-b207338feb30", 00:16:12.684 "is_configured": true, 00:16:12.684 "data_offset": 0, 00:16:12.684 "data_size": 65536 00:16:12.684 }, 00:16:12.684 { 00:16:12.684 "name": "BaseBdev3", 00:16:12.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.684 "is_configured": false, 00:16:12.684 "data_offset": 0, 00:16:12.684 "data_size": 0 00:16:12.684 }, 00:16:12.684 { 00:16:12.684 "name": "BaseBdev4", 00:16:12.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.684 "is_configured": false, 00:16:12.684 "data_offset": 0, 00:16:12.684 "data_size": 0 00:16:12.684 } 00:16:12.684 ] 00:16:12.684 }' 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.684 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.945 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:12.945 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.945 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.945 [2024-11-25 15:43:11.614324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:12.945 BaseBdev3 00:16:12.945 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.945 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:12.945 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:12.945 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:12.945 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:12.945 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:12.945 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:12.945 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:12.945 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.945 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.205 [ 00:16:13.205 { 00:16:13.205 "name": "BaseBdev3", 00:16:13.205 "aliases": [ 00:16:13.205 "7ba76b7d-fd97-46f1-ba0e-ded077b0b505" 00:16:13.205 ], 00:16:13.205 "product_name": "Malloc disk", 00:16:13.205 "block_size": 512, 00:16:13.205 "num_blocks": 65536, 00:16:13.205 "uuid": "7ba76b7d-fd97-46f1-ba0e-ded077b0b505", 00:16:13.205 "assigned_rate_limits": { 00:16:13.205 "rw_ios_per_sec": 0, 00:16:13.205 "rw_mbytes_per_sec": 0, 00:16:13.205 "r_mbytes_per_sec": 0, 00:16:13.205 "w_mbytes_per_sec": 0 00:16:13.205 }, 00:16:13.205 "claimed": true, 00:16:13.205 "claim_type": "exclusive_write", 00:16:13.205 "zoned": false, 00:16:13.205 "supported_io_types": { 00:16:13.205 "read": true, 00:16:13.205 "write": true, 00:16:13.205 "unmap": true, 00:16:13.205 "flush": true, 00:16:13.205 "reset": true, 00:16:13.205 "nvme_admin": false, 
00:16:13.205 "nvme_io": false, 00:16:13.205 "nvme_io_md": false, 00:16:13.205 "write_zeroes": true, 00:16:13.205 "zcopy": true, 00:16:13.205 "get_zone_info": false, 00:16:13.205 "zone_management": false, 00:16:13.205 "zone_append": false, 00:16:13.205 "compare": false, 00:16:13.205 "compare_and_write": false, 00:16:13.205 "abort": true, 00:16:13.205 "seek_hole": false, 00:16:13.205 "seek_data": false, 00:16:13.205 "copy": true, 00:16:13.205 "nvme_iov_md": false 00:16:13.205 }, 00:16:13.205 "memory_domains": [ 00:16:13.205 { 00:16:13.205 "dma_device_id": "system", 00:16:13.205 "dma_device_type": 1 00:16:13.205 }, 00:16:13.205 { 00:16:13.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.205 "dma_device_type": 2 00:16:13.205 } 00:16:13.205 ], 00:16:13.205 "driver_specific": {} 00:16:13.205 } 00:16:13.205 ] 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.205 "name": "Existed_Raid", 00:16:13.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.205 "strip_size_kb": 64, 00:16:13.205 "state": "configuring", 00:16:13.205 "raid_level": "raid5f", 00:16:13.205 "superblock": false, 00:16:13.205 "num_base_bdevs": 4, 00:16:13.205 "num_base_bdevs_discovered": 3, 00:16:13.205 "num_base_bdevs_operational": 4, 00:16:13.205 "base_bdevs_list": [ 00:16:13.205 { 00:16:13.205 "name": "BaseBdev1", 00:16:13.205 "uuid": "f09c5e9d-f745-432f-9b41-3d1c95822deb", 00:16:13.205 "is_configured": true, 00:16:13.205 "data_offset": 0, 00:16:13.205 "data_size": 65536 00:16:13.205 }, 00:16:13.205 { 00:16:13.205 "name": "BaseBdev2", 00:16:13.205 "uuid": "b6465f7f-53fe-4e09-bf91-b207338feb30", 00:16:13.205 "is_configured": true, 00:16:13.205 "data_offset": 0, 00:16:13.205 "data_size": 65536 00:16:13.205 }, 00:16:13.205 { 
00:16:13.205 "name": "BaseBdev3", 00:16:13.205 "uuid": "7ba76b7d-fd97-46f1-ba0e-ded077b0b505", 00:16:13.205 "is_configured": true, 00:16:13.205 "data_offset": 0, 00:16:13.205 "data_size": 65536 00:16:13.205 }, 00:16:13.205 { 00:16:13.205 "name": "BaseBdev4", 00:16:13.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.205 "is_configured": false, 00:16:13.205 "data_offset": 0, 00:16:13.205 "data_size": 0 00:16:13.205 } 00:16:13.205 ] 00:16:13.205 }' 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.205 15:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.465 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:13.465 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.465 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.465 [2024-11-25 15:43:12.127385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:13.465 [2024-11-25 15:43:12.127448] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:13.465 [2024-11-25 15:43:12.127457] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:13.465 [2024-11-25 15:43:12.127707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:13.465 [2024-11-25 15:43:12.134497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:13.465 [2024-11-25 15:43:12.134522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:13.465 [2024-11-25 15:43:12.134784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.465 BaseBdev4 00:16:13.465 15:43:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.465 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:13.465 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:13.465 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:13.465 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:13.465 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:13.465 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:13.465 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:13.465 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.465 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.725 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.725 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:13.725 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.725 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.725 [ 00:16:13.725 { 00:16:13.725 "name": "BaseBdev4", 00:16:13.725 "aliases": [ 00:16:13.725 "140c0cca-291b-416a-a134-306ddf8be766" 00:16:13.725 ], 00:16:13.725 "product_name": "Malloc disk", 00:16:13.725 "block_size": 512, 00:16:13.725 "num_blocks": 65536, 00:16:13.725 "uuid": "140c0cca-291b-416a-a134-306ddf8be766", 00:16:13.725 "assigned_rate_limits": { 00:16:13.725 "rw_ios_per_sec": 0, 00:16:13.725 
"rw_mbytes_per_sec": 0, 00:16:13.725 "r_mbytes_per_sec": 0, 00:16:13.725 "w_mbytes_per_sec": 0 00:16:13.725 }, 00:16:13.725 "claimed": true, 00:16:13.725 "claim_type": "exclusive_write", 00:16:13.725 "zoned": false, 00:16:13.725 "supported_io_types": { 00:16:13.725 "read": true, 00:16:13.725 "write": true, 00:16:13.725 "unmap": true, 00:16:13.725 "flush": true, 00:16:13.725 "reset": true, 00:16:13.725 "nvme_admin": false, 00:16:13.725 "nvme_io": false, 00:16:13.725 "nvme_io_md": false, 00:16:13.725 "write_zeroes": true, 00:16:13.725 "zcopy": true, 00:16:13.725 "get_zone_info": false, 00:16:13.725 "zone_management": false, 00:16:13.725 "zone_append": false, 00:16:13.725 "compare": false, 00:16:13.725 "compare_and_write": false, 00:16:13.725 "abort": true, 00:16:13.725 "seek_hole": false, 00:16:13.725 "seek_data": false, 00:16:13.725 "copy": true, 00:16:13.725 "nvme_iov_md": false 00:16:13.725 }, 00:16:13.725 "memory_domains": [ 00:16:13.725 { 00:16:13.725 "dma_device_id": "system", 00:16:13.725 "dma_device_type": 1 00:16:13.725 }, 00:16:13.725 { 00:16:13.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.726 "dma_device_type": 2 00:16:13.726 } 00:16:13.726 ], 00:16:13.726 "driver_specific": {} 00:16:13.726 } 00:16:13.726 ] 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.726 15:43:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.726 "name": "Existed_Raid", 00:16:13.726 "uuid": "f7b4e59e-637b-43e0-902d-f99838797688", 00:16:13.726 "strip_size_kb": 64, 00:16:13.726 "state": "online", 00:16:13.726 "raid_level": "raid5f", 00:16:13.726 "superblock": false, 00:16:13.726 "num_base_bdevs": 4, 00:16:13.726 "num_base_bdevs_discovered": 4, 00:16:13.726 "num_base_bdevs_operational": 4, 00:16:13.726 "base_bdevs_list": [ 00:16:13.726 { 00:16:13.726 "name": 
"BaseBdev1", 00:16:13.726 "uuid": "f09c5e9d-f745-432f-9b41-3d1c95822deb", 00:16:13.726 "is_configured": true, 00:16:13.726 "data_offset": 0, 00:16:13.726 "data_size": 65536 00:16:13.726 }, 00:16:13.726 { 00:16:13.726 "name": "BaseBdev2", 00:16:13.726 "uuid": "b6465f7f-53fe-4e09-bf91-b207338feb30", 00:16:13.726 "is_configured": true, 00:16:13.726 "data_offset": 0, 00:16:13.726 "data_size": 65536 00:16:13.726 }, 00:16:13.726 { 00:16:13.726 "name": "BaseBdev3", 00:16:13.726 "uuid": "7ba76b7d-fd97-46f1-ba0e-ded077b0b505", 00:16:13.726 "is_configured": true, 00:16:13.726 "data_offset": 0, 00:16:13.726 "data_size": 65536 00:16:13.726 }, 00:16:13.726 { 00:16:13.726 "name": "BaseBdev4", 00:16:13.726 "uuid": "140c0cca-291b-416a-a134-306ddf8be766", 00:16:13.726 "is_configured": true, 00:16:13.726 "data_offset": 0, 00:16:13.726 "data_size": 65536 00:16:13.726 } 00:16:13.726 ] 00:16:13.726 }' 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.726 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.986 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:13.986 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:13.986 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:13.986 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:13.986 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:13.986 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:13.986 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:13.986 15:43:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.986 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:13.986 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.986 [2024-11-25 15:43:12.574281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:13.986 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.986 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:13.986 "name": "Existed_Raid", 00:16:13.986 "aliases": [ 00:16:13.986 "f7b4e59e-637b-43e0-902d-f99838797688" 00:16:13.986 ], 00:16:13.986 "product_name": "Raid Volume", 00:16:13.986 "block_size": 512, 00:16:13.986 "num_blocks": 196608, 00:16:13.986 "uuid": "f7b4e59e-637b-43e0-902d-f99838797688", 00:16:13.986 "assigned_rate_limits": { 00:16:13.986 "rw_ios_per_sec": 0, 00:16:13.986 "rw_mbytes_per_sec": 0, 00:16:13.986 "r_mbytes_per_sec": 0, 00:16:13.986 "w_mbytes_per_sec": 0 00:16:13.986 }, 00:16:13.986 "claimed": false, 00:16:13.986 "zoned": false, 00:16:13.986 "supported_io_types": { 00:16:13.986 "read": true, 00:16:13.986 "write": true, 00:16:13.986 "unmap": false, 00:16:13.986 "flush": false, 00:16:13.986 "reset": true, 00:16:13.986 "nvme_admin": false, 00:16:13.986 "nvme_io": false, 00:16:13.986 "nvme_io_md": false, 00:16:13.986 "write_zeroes": true, 00:16:13.986 "zcopy": false, 00:16:13.986 "get_zone_info": false, 00:16:13.986 "zone_management": false, 00:16:13.986 "zone_append": false, 00:16:13.986 "compare": false, 00:16:13.986 "compare_and_write": false, 00:16:13.986 "abort": false, 00:16:13.986 "seek_hole": false, 00:16:13.986 "seek_data": false, 00:16:13.986 "copy": false, 00:16:13.986 "nvme_iov_md": false 00:16:13.986 }, 00:16:13.986 "driver_specific": { 00:16:13.986 "raid": { 00:16:13.986 "uuid": "f7b4e59e-637b-43e0-902d-f99838797688", 00:16:13.986 "strip_size_kb": 64, 
00:16:13.986 "state": "online", 00:16:13.986 "raid_level": "raid5f", 00:16:13.986 "superblock": false, 00:16:13.986 "num_base_bdevs": 4, 00:16:13.986 "num_base_bdevs_discovered": 4, 00:16:13.986 "num_base_bdevs_operational": 4, 00:16:13.986 "base_bdevs_list": [ 00:16:13.986 { 00:16:13.986 "name": "BaseBdev1", 00:16:13.986 "uuid": "f09c5e9d-f745-432f-9b41-3d1c95822deb", 00:16:13.986 "is_configured": true, 00:16:13.986 "data_offset": 0, 00:16:13.986 "data_size": 65536 00:16:13.986 }, 00:16:13.986 { 00:16:13.986 "name": "BaseBdev2", 00:16:13.986 "uuid": "b6465f7f-53fe-4e09-bf91-b207338feb30", 00:16:13.986 "is_configured": true, 00:16:13.986 "data_offset": 0, 00:16:13.986 "data_size": 65536 00:16:13.986 }, 00:16:13.986 { 00:16:13.986 "name": "BaseBdev3", 00:16:13.986 "uuid": "7ba76b7d-fd97-46f1-ba0e-ded077b0b505", 00:16:13.986 "is_configured": true, 00:16:13.986 "data_offset": 0, 00:16:13.986 "data_size": 65536 00:16:13.986 }, 00:16:13.986 { 00:16:13.986 "name": "BaseBdev4", 00:16:13.986 "uuid": "140c0cca-291b-416a-a134-306ddf8be766", 00:16:13.986 "is_configured": true, 00:16:13.986 "data_offset": 0, 00:16:13.986 "data_size": 65536 00:16:13.986 } 00:16:13.986 ] 00:16:13.986 } 00:16:13.986 } 00:16:13.986 }' 00:16:13.986 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:13.986 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:13.986 BaseBdev2 00:16:13.986 BaseBdev3 00:16:13.986 BaseBdev4' 00:16:13.986 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.246 15:43:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.246 15:43:12 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:14.246 [2024-11-25 15:43:12.897565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.506 15:43:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.506 15:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.506 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.506 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.506 "name": "Existed_Raid", 00:16:14.506 "uuid": "f7b4e59e-637b-43e0-902d-f99838797688", 00:16:14.506 "strip_size_kb": 64, 00:16:14.506 "state": "online", 00:16:14.506 "raid_level": "raid5f", 00:16:14.506 "superblock": false, 00:16:14.506 "num_base_bdevs": 4, 00:16:14.506 "num_base_bdevs_discovered": 3, 00:16:14.506 "num_base_bdevs_operational": 3, 00:16:14.506 "base_bdevs_list": [ 00:16:14.506 { 00:16:14.506 "name": null, 00:16:14.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.506 "is_configured": false, 00:16:14.506 "data_offset": 0, 00:16:14.506 "data_size": 65536 00:16:14.506 }, 00:16:14.506 { 00:16:14.506 "name": "BaseBdev2", 00:16:14.506 "uuid": "b6465f7f-53fe-4e09-bf91-b207338feb30", 00:16:14.506 "is_configured": true, 00:16:14.506 "data_offset": 0, 00:16:14.506 "data_size": 65536 00:16:14.506 }, 00:16:14.506 { 00:16:14.506 "name": "BaseBdev3", 00:16:14.506 "uuid": "7ba76b7d-fd97-46f1-ba0e-ded077b0b505", 00:16:14.506 "is_configured": true, 00:16:14.506 "data_offset": 0, 00:16:14.506 "data_size": 65536 00:16:14.506 }, 00:16:14.506 { 00:16:14.506 "name": "BaseBdev4", 00:16:14.506 "uuid": "140c0cca-291b-416a-a134-306ddf8be766", 00:16:14.506 "is_configured": true, 00:16:14.506 "data_offset": 0, 00:16:14.506 "data_size": 65536 00:16:14.506 } 00:16:14.506 ] 00:16:14.506 }' 00:16:14.506 
15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.506 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.766 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:14.766 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:14.766 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.766 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.766 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.766 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.026 [2024-11-25 15:43:13.491554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:15.026 [2024-11-25 15:43:13.491658] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.026 [2024-11-25 15:43:13.580284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.026 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.026 [2024-11-25 15:43:13.636196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.286 [2024-11-25 15:43:13.764696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:15.286 [2024-11-25 15:43:13.764750] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.286 15:43:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.286 BaseBdev2 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.286 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.547 [ 00:16:15.547 { 00:16:15.547 "name": "BaseBdev2", 00:16:15.547 "aliases": [ 00:16:15.547 "c149f2b0-0ebc-43be-b9a6-9c42d160075c" 00:16:15.547 ], 00:16:15.547 "product_name": "Malloc disk", 00:16:15.547 "block_size": 512, 00:16:15.547 "num_blocks": 65536, 00:16:15.547 "uuid": "c149f2b0-0ebc-43be-b9a6-9c42d160075c", 00:16:15.547 "assigned_rate_limits": { 00:16:15.547 "rw_ios_per_sec": 0, 00:16:15.547 "rw_mbytes_per_sec": 0, 00:16:15.547 "r_mbytes_per_sec": 0, 00:16:15.547 "w_mbytes_per_sec": 0 00:16:15.547 }, 00:16:15.547 "claimed": false, 00:16:15.547 "zoned": false, 00:16:15.547 "supported_io_types": { 00:16:15.547 "read": true, 00:16:15.547 "write": true, 00:16:15.547 "unmap": true, 00:16:15.547 "flush": true, 00:16:15.547 "reset": true, 00:16:15.547 "nvme_admin": false, 00:16:15.547 "nvme_io": false, 00:16:15.547 "nvme_io_md": false, 00:16:15.547 "write_zeroes": true, 00:16:15.547 "zcopy": true, 00:16:15.547 "get_zone_info": false, 00:16:15.547 "zone_management": false, 00:16:15.547 "zone_append": false, 00:16:15.547 "compare": false, 00:16:15.547 "compare_and_write": false, 00:16:15.547 "abort": true, 00:16:15.547 "seek_hole": false, 00:16:15.547 "seek_data": false, 00:16:15.547 "copy": true, 00:16:15.547 "nvme_iov_md": false 00:16:15.547 }, 00:16:15.547 "memory_domains": [ 00:16:15.547 { 00:16:15.547 "dma_device_id": "system", 00:16:15.547 "dma_device_type": 1 00:16:15.547 }, 
00:16:15.547 { 00:16:15.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.547 "dma_device_type": 2 00:16:15.547 } 00:16:15.547 ], 00:16:15.547 "driver_specific": {} 00:16:15.547 } 00:16:15.547 ] 00:16:15.547 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.547 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:15.547 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:15.547 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:15.547 15:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:15.547 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.547 15:43:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.547 BaseBdev3 00:16:15.547 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.547 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:15.547 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:15.547 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:15.547 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:15.547 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:15.547 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:15.547 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:15.547 15:43:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.547 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.547 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.547 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:15.547 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.547 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.547 [ 00:16:15.547 { 00:16:15.547 "name": "BaseBdev3", 00:16:15.547 "aliases": [ 00:16:15.547 "81c0d3b4-31f2-45bd-aff3-ce168b760b50" 00:16:15.547 ], 00:16:15.547 "product_name": "Malloc disk", 00:16:15.547 "block_size": 512, 00:16:15.547 "num_blocks": 65536, 00:16:15.547 "uuid": "81c0d3b4-31f2-45bd-aff3-ce168b760b50", 00:16:15.547 "assigned_rate_limits": { 00:16:15.547 "rw_ios_per_sec": 0, 00:16:15.547 "rw_mbytes_per_sec": 0, 00:16:15.547 "r_mbytes_per_sec": 0, 00:16:15.547 "w_mbytes_per_sec": 0 00:16:15.547 }, 00:16:15.547 "claimed": false, 00:16:15.547 "zoned": false, 00:16:15.547 "supported_io_types": { 00:16:15.547 "read": true, 00:16:15.547 "write": true, 00:16:15.547 "unmap": true, 00:16:15.548 "flush": true, 00:16:15.548 "reset": true, 00:16:15.548 "nvme_admin": false, 00:16:15.548 "nvme_io": false, 00:16:15.548 "nvme_io_md": false, 00:16:15.548 "write_zeroes": true, 00:16:15.548 "zcopy": true, 00:16:15.548 "get_zone_info": false, 00:16:15.548 "zone_management": false, 00:16:15.548 "zone_append": false, 00:16:15.548 "compare": false, 00:16:15.548 "compare_and_write": false, 00:16:15.548 "abort": true, 00:16:15.548 "seek_hole": false, 00:16:15.548 "seek_data": false, 00:16:15.548 "copy": true, 00:16:15.548 "nvme_iov_md": false 00:16:15.548 }, 00:16:15.548 "memory_domains": [ 00:16:15.548 { 00:16:15.548 "dma_device_id": "system", 00:16:15.548 
"dma_device_type": 1 00:16:15.548 }, 00:16:15.548 { 00:16:15.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.548 "dma_device_type": 2 00:16:15.548 } 00:16:15.548 ], 00:16:15.548 "driver_specific": {} 00:16:15.548 } 00:16:15.548 ] 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.548 BaseBdev4 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:15.548 15:43:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.548 [ 00:16:15.548 { 00:16:15.548 "name": "BaseBdev4", 00:16:15.548 "aliases": [ 00:16:15.548 "b6fcfc6e-3b50-4649-8816-905e17428ae3" 00:16:15.548 ], 00:16:15.548 "product_name": "Malloc disk", 00:16:15.548 "block_size": 512, 00:16:15.548 "num_blocks": 65536, 00:16:15.548 "uuid": "b6fcfc6e-3b50-4649-8816-905e17428ae3", 00:16:15.548 "assigned_rate_limits": { 00:16:15.548 "rw_ios_per_sec": 0, 00:16:15.548 "rw_mbytes_per_sec": 0, 00:16:15.548 "r_mbytes_per_sec": 0, 00:16:15.548 "w_mbytes_per_sec": 0 00:16:15.548 }, 00:16:15.548 "claimed": false, 00:16:15.548 "zoned": false, 00:16:15.548 "supported_io_types": { 00:16:15.548 "read": true, 00:16:15.548 "write": true, 00:16:15.548 "unmap": true, 00:16:15.548 "flush": true, 00:16:15.548 "reset": true, 00:16:15.548 "nvme_admin": false, 00:16:15.548 "nvme_io": false, 00:16:15.548 "nvme_io_md": false, 00:16:15.548 "write_zeroes": true, 00:16:15.548 "zcopy": true, 00:16:15.548 "get_zone_info": false, 00:16:15.548 "zone_management": false, 00:16:15.548 "zone_append": false, 00:16:15.548 "compare": false, 00:16:15.548 "compare_and_write": false, 00:16:15.548 "abort": true, 00:16:15.548 "seek_hole": false, 00:16:15.548 "seek_data": false, 00:16:15.548 "copy": true, 00:16:15.548 "nvme_iov_md": false 00:16:15.548 }, 00:16:15.548 "memory_domains": [ 00:16:15.548 { 00:16:15.548 
"dma_device_id": "system", 00:16:15.548 "dma_device_type": 1 00:16:15.548 }, 00:16:15.548 { 00:16:15.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.548 "dma_device_type": 2 00:16:15.548 } 00:16:15.548 ], 00:16:15.548 "driver_specific": {} 00:16:15.548 } 00:16:15.548 ] 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.548 [2024-11-25 15:43:14.135796] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:15.548 [2024-11-25 15:43:14.135836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:15.548 [2024-11-25 15:43:14.135856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:15.548 [2024-11-25 15:43:14.137553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:15.548 [2024-11-25 15:43:14.137606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.548 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.548 "name": "Existed_Raid", 00:16:15.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.548 "strip_size_kb": 64, 00:16:15.548 "state": "configuring", 00:16:15.548 "raid_level": "raid5f", 00:16:15.548 "superblock": false, 00:16:15.548 
"num_base_bdevs": 4, 00:16:15.548 "num_base_bdevs_discovered": 3, 00:16:15.548 "num_base_bdevs_operational": 4, 00:16:15.548 "base_bdevs_list": [ 00:16:15.548 { 00:16:15.548 "name": "BaseBdev1", 00:16:15.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.548 "is_configured": false, 00:16:15.549 "data_offset": 0, 00:16:15.549 "data_size": 0 00:16:15.549 }, 00:16:15.549 { 00:16:15.549 "name": "BaseBdev2", 00:16:15.549 "uuid": "c149f2b0-0ebc-43be-b9a6-9c42d160075c", 00:16:15.549 "is_configured": true, 00:16:15.549 "data_offset": 0, 00:16:15.549 "data_size": 65536 00:16:15.549 }, 00:16:15.549 { 00:16:15.549 "name": "BaseBdev3", 00:16:15.549 "uuid": "81c0d3b4-31f2-45bd-aff3-ce168b760b50", 00:16:15.549 "is_configured": true, 00:16:15.549 "data_offset": 0, 00:16:15.549 "data_size": 65536 00:16:15.549 }, 00:16:15.549 { 00:16:15.549 "name": "BaseBdev4", 00:16:15.549 "uuid": "b6fcfc6e-3b50-4649-8816-905e17428ae3", 00:16:15.549 "is_configured": true, 00:16:15.549 "data_offset": 0, 00:16:15.549 "data_size": 65536 00:16:15.549 } 00:16:15.549 ] 00:16:15.549 }' 00:16:15.549 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.549 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.119 [2024-11-25 15:43:14.511203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.119 "name": "Existed_Raid", 00:16:16.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.119 "strip_size_kb": 64, 00:16:16.119 "state": "configuring", 00:16:16.119 "raid_level": "raid5f", 00:16:16.119 "superblock": false, 00:16:16.119 "num_base_bdevs": 4, 
00:16:16.119 "num_base_bdevs_discovered": 2, 00:16:16.119 "num_base_bdevs_operational": 4, 00:16:16.119 "base_bdevs_list": [ 00:16:16.119 { 00:16:16.119 "name": "BaseBdev1", 00:16:16.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.119 "is_configured": false, 00:16:16.119 "data_offset": 0, 00:16:16.119 "data_size": 0 00:16:16.119 }, 00:16:16.119 { 00:16:16.119 "name": null, 00:16:16.119 "uuid": "c149f2b0-0ebc-43be-b9a6-9c42d160075c", 00:16:16.119 "is_configured": false, 00:16:16.119 "data_offset": 0, 00:16:16.119 "data_size": 65536 00:16:16.119 }, 00:16:16.119 { 00:16:16.119 "name": "BaseBdev3", 00:16:16.119 "uuid": "81c0d3b4-31f2-45bd-aff3-ce168b760b50", 00:16:16.119 "is_configured": true, 00:16:16.119 "data_offset": 0, 00:16:16.119 "data_size": 65536 00:16:16.119 }, 00:16:16.119 { 00:16:16.119 "name": "BaseBdev4", 00:16:16.119 "uuid": "b6fcfc6e-3b50-4649-8816-905e17428ae3", 00:16:16.119 "is_configured": true, 00:16:16.119 "data_offset": 0, 00:16:16.119 "data_size": 65536 00:16:16.119 } 00:16:16.119 ] 00:16:16.119 }' 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.119 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.378 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:16.378 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.379 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.379 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.379 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.379 15:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:16.379 15:43:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:16.379 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.379 15:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.379 [2024-11-25 15:43:15.026550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.379 BaseBdev1 00:16:16.379 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.379 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:16.379 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:16.379 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:16.379 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:16.379 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:16.379 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:16.379 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:16.379 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.379 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.379 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.379 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:16.379 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.379 15:43:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.379 [ 00:16:16.379 { 00:16:16.379 "name": "BaseBdev1", 00:16:16.379 "aliases": [ 00:16:16.379 "c512b137-1b52-463f-9f5c-3bcb5852408e" 00:16:16.379 ], 00:16:16.379 "product_name": "Malloc disk", 00:16:16.379 "block_size": 512, 00:16:16.379 "num_blocks": 65536, 00:16:16.379 "uuid": "c512b137-1b52-463f-9f5c-3bcb5852408e", 00:16:16.379 "assigned_rate_limits": { 00:16:16.379 "rw_ios_per_sec": 0, 00:16:16.379 "rw_mbytes_per_sec": 0, 00:16:16.379 "r_mbytes_per_sec": 0, 00:16:16.379 "w_mbytes_per_sec": 0 00:16:16.379 }, 00:16:16.379 "claimed": true, 00:16:16.379 "claim_type": "exclusive_write", 00:16:16.379 "zoned": false, 00:16:16.379 "supported_io_types": { 00:16:16.379 "read": true, 00:16:16.379 "write": true, 00:16:16.379 "unmap": true, 00:16:16.379 "flush": true, 00:16:16.379 "reset": true, 00:16:16.379 "nvme_admin": false, 00:16:16.379 "nvme_io": false, 00:16:16.379 "nvme_io_md": false, 00:16:16.379 "write_zeroes": true, 00:16:16.379 "zcopy": true, 00:16:16.379 "get_zone_info": false, 00:16:16.379 "zone_management": false, 00:16:16.379 "zone_append": false, 00:16:16.379 "compare": false, 00:16:16.379 "compare_and_write": false, 00:16:16.379 "abort": true, 00:16:16.379 "seek_hole": false, 00:16:16.379 "seek_data": false, 00:16:16.638 "copy": true, 00:16:16.638 "nvme_iov_md": false 00:16:16.638 }, 00:16:16.638 "memory_domains": [ 00:16:16.638 { 00:16:16.638 "dma_device_id": "system", 00:16:16.638 "dma_device_type": 1 00:16:16.638 }, 00:16:16.638 { 00:16:16.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.638 "dma_device_type": 2 00:16:16.638 } 00:16:16.638 ], 00:16:16.638 "driver_specific": {} 00:16:16.638 } 00:16:16.638 ] 00:16:16.638 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.638 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:16.638 15:43:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:16.638 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.638 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.638 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.638 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.638 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.638 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.638 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.638 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.638 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.638 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.638 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.639 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.639 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.639 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.639 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.639 "name": "Existed_Raid", 00:16:16.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.639 "strip_size_kb": 64, 00:16:16.639 "state": 
"configuring", 00:16:16.639 "raid_level": "raid5f", 00:16:16.639 "superblock": false, 00:16:16.639 "num_base_bdevs": 4, 00:16:16.639 "num_base_bdevs_discovered": 3, 00:16:16.639 "num_base_bdevs_operational": 4, 00:16:16.639 "base_bdevs_list": [ 00:16:16.639 { 00:16:16.639 "name": "BaseBdev1", 00:16:16.639 "uuid": "c512b137-1b52-463f-9f5c-3bcb5852408e", 00:16:16.639 "is_configured": true, 00:16:16.639 "data_offset": 0, 00:16:16.639 "data_size": 65536 00:16:16.639 }, 00:16:16.639 { 00:16:16.639 "name": null, 00:16:16.639 "uuid": "c149f2b0-0ebc-43be-b9a6-9c42d160075c", 00:16:16.639 "is_configured": false, 00:16:16.639 "data_offset": 0, 00:16:16.639 "data_size": 65536 00:16:16.639 }, 00:16:16.639 { 00:16:16.639 "name": "BaseBdev3", 00:16:16.639 "uuid": "81c0d3b4-31f2-45bd-aff3-ce168b760b50", 00:16:16.639 "is_configured": true, 00:16:16.639 "data_offset": 0, 00:16:16.639 "data_size": 65536 00:16:16.639 }, 00:16:16.639 { 00:16:16.639 "name": "BaseBdev4", 00:16:16.639 "uuid": "b6fcfc6e-3b50-4649-8816-905e17428ae3", 00:16:16.639 "is_configured": true, 00:16:16.639 "data_offset": 0, 00:16:16.639 "data_size": 65536 00:16:16.639 } 00:16:16.639 ] 00:16:16.639 }' 00:16:16.639 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.639 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.898 15:43:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.898 [2024-11-25 15:43:15.565663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.898 15:43:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.898 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.158 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.158 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.158 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.158 "name": "Existed_Raid", 00:16:17.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.158 "strip_size_kb": 64, 00:16:17.158 "state": "configuring", 00:16:17.158 "raid_level": "raid5f", 00:16:17.158 "superblock": false, 00:16:17.158 "num_base_bdevs": 4, 00:16:17.158 "num_base_bdevs_discovered": 2, 00:16:17.158 "num_base_bdevs_operational": 4, 00:16:17.158 "base_bdevs_list": [ 00:16:17.158 { 00:16:17.158 "name": "BaseBdev1", 00:16:17.158 "uuid": "c512b137-1b52-463f-9f5c-3bcb5852408e", 00:16:17.158 "is_configured": true, 00:16:17.158 "data_offset": 0, 00:16:17.158 "data_size": 65536 00:16:17.158 }, 00:16:17.158 { 00:16:17.158 "name": null, 00:16:17.158 "uuid": "c149f2b0-0ebc-43be-b9a6-9c42d160075c", 00:16:17.158 "is_configured": false, 00:16:17.158 "data_offset": 0, 00:16:17.158 "data_size": 65536 00:16:17.158 }, 00:16:17.158 { 00:16:17.158 "name": null, 00:16:17.158 "uuid": "81c0d3b4-31f2-45bd-aff3-ce168b760b50", 00:16:17.158 "is_configured": false, 00:16:17.158 "data_offset": 0, 00:16:17.158 "data_size": 65536 00:16:17.158 }, 00:16:17.158 { 00:16:17.158 "name": "BaseBdev4", 00:16:17.158 "uuid": "b6fcfc6e-3b50-4649-8816-905e17428ae3", 00:16:17.158 "is_configured": true, 00:16:17.158 "data_offset": 0, 00:16:17.158 "data_size": 65536 00:16:17.158 } 00:16:17.158 ] 00:16:17.158 }' 00:16:17.158 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.158 15:43:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.418 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.418 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.418 15:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:17.418 15:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.418 [2024-11-25 15:43:16.048844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.418 
15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.418 "name": "Existed_Raid", 00:16:17.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.418 "strip_size_kb": 64, 00:16:17.418 "state": "configuring", 00:16:17.418 "raid_level": "raid5f", 00:16:17.418 "superblock": false, 00:16:17.418 "num_base_bdevs": 4, 00:16:17.418 "num_base_bdevs_discovered": 3, 00:16:17.418 "num_base_bdevs_operational": 4, 00:16:17.418 "base_bdevs_list": [ 00:16:17.418 { 00:16:17.418 "name": "BaseBdev1", 00:16:17.418 "uuid": "c512b137-1b52-463f-9f5c-3bcb5852408e", 00:16:17.418 "is_configured": true, 00:16:17.418 "data_offset": 0, 00:16:17.418 "data_size": 65536 00:16:17.418 }, 00:16:17.418 { 00:16:17.418 "name": null, 00:16:17.418 "uuid": "c149f2b0-0ebc-43be-b9a6-9c42d160075c", 00:16:17.418 "is_configured": 
false, 00:16:17.418 "data_offset": 0, 00:16:17.418 "data_size": 65536 00:16:17.418 }, 00:16:17.418 { 00:16:17.418 "name": "BaseBdev3", 00:16:17.418 "uuid": "81c0d3b4-31f2-45bd-aff3-ce168b760b50", 00:16:17.418 "is_configured": true, 00:16:17.418 "data_offset": 0, 00:16:17.418 "data_size": 65536 00:16:17.418 }, 00:16:17.418 { 00:16:17.418 "name": "BaseBdev4", 00:16:17.418 "uuid": "b6fcfc6e-3b50-4649-8816-905e17428ae3", 00:16:17.418 "is_configured": true, 00:16:17.418 "data_offset": 0, 00:16:17.418 "data_size": 65536 00:16:17.418 } 00:16:17.418 ] 00:16:17.418 }' 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.418 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.987 [2024-11-25 15:43:16.532060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.987 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.246 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.246 "name": "Existed_Raid", 00:16:18.246 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:18.246 "strip_size_kb": 64, 00:16:18.246 "state": "configuring", 00:16:18.246 "raid_level": "raid5f", 00:16:18.246 "superblock": false, 00:16:18.246 "num_base_bdevs": 4, 00:16:18.246 "num_base_bdevs_discovered": 2, 00:16:18.247 "num_base_bdevs_operational": 4, 00:16:18.247 "base_bdevs_list": [ 00:16:18.247 { 00:16:18.247 "name": null, 00:16:18.247 "uuid": "c512b137-1b52-463f-9f5c-3bcb5852408e", 00:16:18.247 "is_configured": false, 00:16:18.247 "data_offset": 0, 00:16:18.247 "data_size": 65536 00:16:18.247 }, 00:16:18.247 { 00:16:18.247 "name": null, 00:16:18.247 "uuid": "c149f2b0-0ebc-43be-b9a6-9c42d160075c", 00:16:18.247 "is_configured": false, 00:16:18.247 "data_offset": 0, 00:16:18.247 "data_size": 65536 00:16:18.247 }, 00:16:18.247 { 00:16:18.247 "name": "BaseBdev3", 00:16:18.247 "uuid": "81c0d3b4-31f2-45bd-aff3-ce168b760b50", 00:16:18.247 "is_configured": true, 00:16:18.247 "data_offset": 0, 00:16:18.247 "data_size": 65536 00:16:18.247 }, 00:16:18.247 { 00:16:18.247 "name": "BaseBdev4", 00:16:18.247 "uuid": "b6fcfc6e-3b50-4649-8816-905e17428ae3", 00:16:18.247 "is_configured": true, 00:16:18.247 "data_offset": 0, 00:16:18.247 "data_size": 65536 00:16:18.247 } 00:16:18.247 ] 00:16:18.247 }' 00:16:18.247 15:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.247 15:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.506 [2024-11-25 15:43:17.107787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.506 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.507 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.507 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.507 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.507 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.507 "name": "Existed_Raid", 00:16:18.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.507 "strip_size_kb": 64, 00:16:18.507 "state": "configuring", 00:16:18.507 "raid_level": "raid5f", 00:16:18.507 "superblock": false, 00:16:18.507 "num_base_bdevs": 4, 00:16:18.507 "num_base_bdevs_discovered": 3, 00:16:18.507 "num_base_bdevs_operational": 4, 00:16:18.507 "base_bdevs_list": [ 00:16:18.507 { 00:16:18.507 "name": null, 00:16:18.507 "uuid": "c512b137-1b52-463f-9f5c-3bcb5852408e", 00:16:18.507 "is_configured": false, 00:16:18.507 "data_offset": 0, 00:16:18.507 "data_size": 65536 00:16:18.507 }, 00:16:18.507 { 00:16:18.507 "name": "BaseBdev2", 00:16:18.507 "uuid": "c149f2b0-0ebc-43be-b9a6-9c42d160075c", 00:16:18.507 "is_configured": true, 00:16:18.507 "data_offset": 0, 00:16:18.507 "data_size": 65536 00:16:18.507 }, 00:16:18.507 { 00:16:18.507 "name": "BaseBdev3", 00:16:18.507 "uuid": "81c0d3b4-31f2-45bd-aff3-ce168b760b50", 00:16:18.507 "is_configured": true, 00:16:18.507 "data_offset": 0, 00:16:18.507 "data_size": 65536 00:16:18.507 }, 00:16:18.507 { 00:16:18.507 "name": "BaseBdev4", 00:16:18.507 "uuid": "b6fcfc6e-3b50-4649-8816-905e17428ae3", 00:16:18.507 "is_configured": true, 00:16:18.507 "data_offset": 0, 00:16:18.507 "data_size": 65536 00:16:18.507 } 00:16:18.507 ] 00:16:18.507 }' 00:16:18.507 15:43:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.507 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c512b137-1b52-463f-9f5c-3bcb5852408e 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.076 [2024-11-25 15:43:17.688556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:19.076 [2024-11-25 
15:43:17.688607] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:19.076 [2024-11-25 15:43:17.688614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:19.076 [2024-11-25 15:43:17.688848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:19.076 [2024-11-25 15:43:17.695381] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:19.076 [2024-11-25 15:43:17.695408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:19.076 [2024-11-25 15:43:17.695675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.076 NewBaseBdev 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.076 [ 00:16:19.076 { 00:16:19.076 "name": "NewBaseBdev", 00:16:19.076 "aliases": [ 00:16:19.076 "c512b137-1b52-463f-9f5c-3bcb5852408e" 00:16:19.076 ], 00:16:19.076 "product_name": "Malloc disk", 00:16:19.076 "block_size": 512, 00:16:19.076 "num_blocks": 65536, 00:16:19.076 "uuid": "c512b137-1b52-463f-9f5c-3bcb5852408e", 00:16:19.076 "assigned_rate_limits": { 00:16:19.076 "rw_ios_per_sec": 0, 00:16:19.076 "rw_mbytes_per_sec": 0, 00:16:19.076 "r_mbytes_per_sec": 0, 00:16:19.076 "w_mbytes_per_sec": 0 00:16:19.076 }, 00:16:19.076 "claimed": true, 00:16:19.076 "claim_type": "exclusive_write", 00:16:19.076 "zoned": false, 00:16:19.076 "supported_io_types": { 00:16:19.076 "read": true, 00:16:19.076 "write": true, 00:16:19.076 "unmap": true, 00:16:19.076 "flush": true, 00:16:19.076 "reset": true, 00:16:19.076 "nvme_admin": false, 00:16:19.076 "nvme_io": false, 00:16:19.076 "nvme_io_md": false, 00:16:19.076 "write_zeroes": true, 00:16:19.076 "zcopy": true, 00:16:19.076 "get_zone_info": false, 00:16:19.076 "zone_management": false, 00:16:19.076 "zone_append": false, 00:16:19.076 "compare": false, 00:16:19.076 "compare_and_write": false, 00:16:19.076 "abort": true, 00:16:19.076 "seek_hole": false, 00:16:19.076 "seek_data": false, 00:16:19.076 "copy": true, 00:16:19.076 "nvme_iov_md": false 00:16:19.076 }, 00:16:19.076 "memory_domains": [ 00:16:19.076 { 00:16:19.076 "dma_device_id": "system", 00:16:19.076 "dma_device_type": 1 00:16:19.076 }, 00:16:19.076 { 00:16:19.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.076 "dma_device_type": 2 00:16:19.076 } 
00:16:19.076 ], 00:16:19.076 "driver_specific": {} 00:16:19.076 } 00:16:19.076 ] 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.076 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.077 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.077 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.336 15:43:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.336 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.336 "name": "Existed_Raid", 00:16:19.336 "uuid": "af751614-3edc-410b-b48a-0659b89a43dc", 00:16:19.336 "strip_size_kb": 64, 00:16:19.336 "state": "online", 00:16:19.336 "raid_level": "raid5f", 00:16:19.336 "superblock": false, 00:16:19.336 "num_base_bdevs": 4, 00:16:19.336 "num_base_bdevs_discovered": 4, 00:16:19.336 "num_base_bdevs_operational": 4, 00:16:19.336 "base_bdevs_list": [ 00:16:19.336 { 00:16:19.336 "name": "NewBaseBdev", 00:16:19.336 "uuid": "c512b137-1b52-463f-9f5c-3bcb5852408e", 00:16:19.336 "is_configured": true, 00:16:19.336 "data_offset": 0, 00:16:19.336 "data_size": 65536 00:16:19.336 }, 00:16:19.336 { 00:16:19.336 "name": "BaseBdev2", 00:16:19.336 "uuid": "c149f2b0-0ebc-43be-b9a6-9c42d160075c", 00:16:19.336 "is_configured": true, 00:16:19.336 "data_offset": 0, 00:16:19.336 "data_size": 65536 00:16:19.336 }, 00:16:19.336 { 00:16:19.336 "name": "BaseBdev3", 00:16:19.336 "uuid": "81c0d3b4-31f2-45bd-aff3-ce168b760b50", 00:16:19.336 "is_configured": true, 00:16:19.336 "data_offset": 0, 00:16:19.336 "data_size": 65536 00:16:19.336 }, 00:16:19.336 { 00:16:19.336 "name": "BaseBdev4", 00:16:19.336 "uuid": "b6fcfc6e-3b50-4649-8816-905e17428ae3", 00:16:19.336 "is_configured": true, 00:16:19.336 "data_offset": 0, 00:16:19.336 "data_size": 65536 00:16:19.336 } 00:16:19.336 ] 00:16:19.336 }' 00:16:19.336 15:43:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.336 15:43:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:19.597 [2024-11-25 15:43:18.103564] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:19.597 "name": "Existed_Raid", 00:16:19.597 "aliases": [ 00:16:19.597 "af751614-3edc-410b-b48a-0659b89a43dc" 00:16:19.597 ], 00:16:19.597 "product_name": "Raid Volume", 00:16:19.597 "block_size": 512, 00:16:19.597 "num_blocks": 196608, 00:16:19.597 "uuid": "af751614-3edc-410b-b48a-0659b89a43dc", 00:16:19.597 "assigned_rate_limits": { 00:16:19.597 "rw_ios_per_sec": 0, 00:16:19.597 "rw_mbytes_per_sec": 0, 00:16:19.597 "r_mbytes_per_sec": 0, 00:16:19.597 "w_mbytes_per_sec": 0 00:16:19.597 }, 00:16:19.597 "claimed": false, 00:16:19.597 "zoned": false, 00:16:19.597 "supported_io_types": { 00:16:19.597 "read": true, 00:16:19.597 "write": true, 00:16:19.597 "unmap": false, 00:16:19.597 "flush": false, 00:16:19.597 "reset": true, 00:16:19.597 "nvme_admin": false, 00:16:19.597 "nvme_io": false, 00:16:19.597 "nvme_io_md": 
false, 00:16:19.597 "write_zeroes": true, 00:16:19.597 "zcopy": false, 00:16:19.597 "get_zone_info": false, 00:16:19.597 "zone_management": false, 00:16:19.597 "zone_append": false, 00:16:19.597 "compare": false, 00:16:19.597 "compare_and_write": false, 00:16:19.597 "abort": false, 00:16:19.597 "seek_hole": false, 00:16:19.597 "seek_data": false, 00:16:19.597 "copy": false, 00:16:19.597 "nvme_iov_md": false 00:16:19.597 }, 00:16:19.597 "driver_specific": { 00:16:19.597 "raid": { 00:16:19.597 "uuid": "af751614-3edc-410b-b48a-0659b89a43dc", 00:16:19.597 "strip_size_kb": 64, 00:16:19.597 "state": "online", 00:16:19.597 "raid_level": "raid5f", 00:16:19.597 "superblock": false, 00:16:19.597 "num_base_bdevs": 4, 00:16:19.597 "num_base_bdevs_discovered": 4, 00:16:19.597 "num_base_bdevs_operational": 4, 00:16:19.597 "base_bdevs_list": [ 00:16:19.597 { 00:16:19.597 "name": "NewBaseBdev", 00:16:19.597 "uuid": "c512b137-1b52-463f-9f5c-3bcb5852408e", 00:16:19.597 "is_configured": true, 00:16:19.597 "data_offset": 0, 00:16:19.597 "data_size": 65536 00:16:19.597 }, 00:16:19.597 { 00:16:19.597 "name": "BaseBdev2", 00:16:19.597 "uuid": "c149f2b0-0ebc-43be-b9a6-9c42d160075c", 00:16:19.597 "is_configured": true, 00:16:19.597 "data_offset": 0, 00:16:19.597 "data_size": 65536 00:16:19.597 }, 00:16:19.597 { 00:16:19.597 "name": "BaseBdev3", 00:16:19.597 "uuid": "81c0d3b4-31f2-45bd-aff3-ce168b760b50", 00:16:19.597 "is_configured": true, 00:16:19.597 "data_offset": 0, 00:16:19.597 "data_size": 65536 00:16:19.597 }, 00:16:19.597 { 00:16:19.597 "name": "BaseBdev4", 00:16:19.597 "uuid": "b6fcfc6e-3b50-4649-8816-905e17428ae3", 00:16:19.597 "is_configured": true, 00:16:19.597 "data_offset": 0, 00:16:19.597 "data_size": 65536 00:16:19.597 } 00:16:19.597 ] 00:16:19.597 } 00:16:19.597 } 00:16:19.597 }' 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:19.597 15:43:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:19.597 BaseBdev2 00:16:19.597 BaseBdev3 00:16:19.597 BaseBdev4' 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.597 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.857 [2024-11-25 15:43:18.422940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:19.857 [2024-11-25 15:43:18.422971] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.857 [2024-11-25 15:43:18.423048] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.857 [2024-11-25 15:43:18.423359] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.857 [2024-11-25 15:43:18.423374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82384 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82384 ']' 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82384 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.857 15:43:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82384 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.857 killing process with pid 82384 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82384' 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82384 00:16:19.857 [2024-11-25 15:43:18.463623] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.857 15:43:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82384 00:16:20.427 [2024-11-25 15:43:18.829645] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:21.363 00:16:21.363 real 0m11.067s 00:16:21.363 user 0m17.730s 00:16:21.363 sys 0m1.932s 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.363 ************************************ 00:16:21.363 END TEST raid5f_state_function_test 00:16:21.363 ************************************ 00:16:21.363 15:43:19 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:21.363 15:43:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:21.363 15:43:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.363 15:43:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.363 ************************************ 00:16:21.363 START TEST 
raid5f_state_function_test_sb 00:16:21.363 ************************************ 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:21.363 
15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.363 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83050 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83050' 00:16:21.364 Process raid pid: 83050 00:16:21.364 15:43:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83050 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83050 ']' 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.364 15:43:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.364 [2024-11-25 15:43:20.022149] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:16:21.364 [2024-11-25 15:43:20.022273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.623 [2024-11-25 15:43:20.193641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.623 [2024-11-25 15:43:20.298251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.883 [2024-11-25 15:43:20.479042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.883 [2024-11-25 15:43:20.479075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.452 15:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.452 15:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:22.452 15:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:22.452 15:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.452 15:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.452 [2024-11-25 15:43:20.840433] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:22.452 [2024-11-25 15:43:20.840480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:22.452 [2024-11-25 15:43:20.840495] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.452 [2024-11-25 15:43:20.840504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.453 [2024-11-25 15:43:20.840510] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:22.453 [2024-11-25 15:43:20.840518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:22.453 [2024-11-25 15:43:20.840524] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:22.453 [2024-11-25 15:43:20.840532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.453 "name": "Existed_Raid", 00:16:22.453 "uuid": "931b6d87-d05b-4023-926f-8e8f961fa6fc", 00:16:22.453 "strip_size_kb": 64, 00:16:22.453 "state": "configuring", 00:16:22.453 "raid_level": "raid5f", 00:16:22.453 "superblock": true, 00:16:22.453 "num_base_bdevs": 4, 00:16:22.453 "num_base_bdevs_discovered": 0, 00:16:22.453 "num_base_bdevs_operational": 4, 00:16:22.453 "base_bdevs_list": [ 00:16:22.453 { 00:16:22.453 "name": "BaseBdev1", 00:16:22.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.453 "is_configured": false, 00:16:22.453 "data_offset": 0, 00:16:22.453 "data_size": 0 00:16:22.453 }, 00:16:22.453 { 00:16:22.453 "name": "BaseBdev2", 00:16:22.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.453 "is_configured": false, 00:16:22.453 "data_offset": 0, 00:16:22.453 "data_size": 0 00:16:22.453 }, 00:16:22.453 { 00:16:22.453 "name": "BaseBdev3", 00:16:22.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.453 "is_configured": false, 00:16:22.453 "data_offset": 0, 00:16:22.453 "data_size": 0 00:16:22.453 }, 00:16:22.453 { 00:16:22.453 "name": "BaseBdev4", 00:16:22.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.453 "is_configured": false, 00:16:22.453 "data_offset": 0, 00:16:22.453 "data_size": 0 00:16:22.453 } 00:16:22.453 ] 00:16:22.453 }' 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.453 15:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.714 [2024-11-25 15:43:21.255642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:22.714 [2024-11-25 15:43:21.255681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.714 [2024-11-25 15:43:21.263650] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:22.714 [2024-11-25 15:43:21.263687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:22.714 [2024-11-25 15:43:21.263695] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.714 [2024-11-25 15:43:21.263704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.714 [2024-11-25 15:43:21.263710] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:22.714 [2024-11-25 15:43:21.263718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:22.714 [2024-11-25 15:43:21.263724] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:22.714 [2024-11-25 15:43:21.263732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.714 [2024-11-25 15:43:21.304940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.714 BaseBdev1 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.714 [ 00:16:22.714 { 00:16:22.714 "name": "BaseBdev1", 00:16:22.714 "aliases": [ 00:16:22.714 "23f1a178-47ee-4df4-b653-f49e78899b28" 00:16:22.714 ], 00:16:22.714 "product_name": "Malloc disk", 00:16:22.714 "block_size": 512, 00:16:22.714 "num_blocks": 65536, 00:16:22.714 "uuid": "23f1a178-47ee-4df4-b653-f49e78899b28", 00:16:22.714 "assigned_rate_limits": { 00:16:22.714 "rw_ios_per_sec": 0, 00:16:22.714 "rw_mbytes_per_sec": 0, 00:16:22.714 "r_mbytes_per_sec": 0, 00:16:22.714 "w_mbytes_per_sec": 0 00:16:22.714 }, 00:16:22.714 "claimed": true, 00:16:22.714 "claim_type": "exclusive_write", 00:16:22.714 "zoned": false, 00:16:22.714 "supported_io_types": { 00:16:22.714 "read": true, 00:16:22.714 "write": true, 00:16:22.714 "unmap": true, 00:16:22.714 "flush": true, 00:16:22.714 "reset": true, 00:16:22.714 "nvme_admin": false, 00:16:22.714 "nvme_io": false, 00:16:22.714 "nvme_io_md": false, 00:16:22.714 "write_zeroes": true, 00:16:22.714 "zcopy": true, 00:16:22.714 "get_zone_info": false, 00:16:22.714 "zone_management": false, 00:16:22.714 "zone_append": false, 00:16:22.714 "compare": false, 00:16:22.714 "compare_and_write": false, 00:16:22.714 "abort": true, 00:16:22.714 "seek_hole": false, 00:16:22.714 "seek_data": false, 00:16:22.714 "copy": true, 00:16:22.714 "nvme_iov_md": false 00:16:22.714 }, 00:16:22.714 "memory_domains": [ 00:16:22.714 { 00:16:22.714 "dma_device_id": "system", 00:16:22.714 "dma_device_type": 1 00:16:22.714 }, 00:16:22.714 { 00:16:22.714 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:22.714 "dma_device_type": 2 00:16:22.714 } 00:16:22.714 ], 00:16:22.714 "driver_specific": {} 00:16:22.714 } 00:16:22.714 ] 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:22.714 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.715 "name": "Existed_Raid", 00:16:22.715 "uuid": "dc1bf9b1-90e0-4551-9321-eb1983bdf266", 00:16:22.715 "strip_size_kb": 64, 00:16:22.715 "state": "configuring", 00:16:22.715 "raid_level": "raid5f", 00:16:22.715 "superblock": true, 00:16:22.715 "num_base_bdevs": 4, 00:16:22.715 "num_base_bdevs_discovered": 1, 00:16:22.715 "num_base_bdevs_operational": 4, 00:16:22.715 "base_bdevs_list": [ 00:16:22.715 { 00:16:22.715 "name": "BaseBdev1", 00:16:22.715 "uuid": "23f1a178-47ee-4df4-b653-f49e78899b28", 00:16:22.715 "is_configured": true, 00:16:22.715 "data_offset": 2048, 00:16:22.715 "data_size": 63488 00:16:22.715 }, 00:16:22.715 { 00:16:22.715 "name": "BaseBdev2", 00:16:22.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.715 "is_configured": false, 00:16:22.715 "data_offset": 0, 00:16:22.715 "data_size": 0 00:16:22.715 }, 00:16:22.715 { 00:16:22.715 "name": "BaseBdev3", 00:16:22.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.715 "is_configured": false, 00:16:22.715 "data_offset": 0, 00:16:22.715 "data_size": 0 00:16:22.715 }, 00:16:22.715 { 00:16:22.715 "name": "BaseBdev4", 00:16:22.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.715 "is_configured": false, 00:16:22.715 "data_offset": 0, 00:16:22.715 "data_size": 0 00:16:22.715 } 00:16:22.715 ] 00:16:22.715 }' 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.715 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:23.285 15:43:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.285 [2024-11-25 15:43:21.748203] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.285 [2024-11-25 15:43:21.748252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.285 [2024-11-25 15:43:21.756249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.285 [2024-11-25 15:43:21.758037] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.285 [2024-11-25 15:43:21.758071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.285 [2024-11-25 15:43:21.758081] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:23.285 [2024-11-25 15:43:21.758091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.285 [2024-11-25 15:43:21.758097] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:23.285 [2024-11-25 15:43:21.758105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.285 15:43:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.285 "name": "Existed_Raid", 00:16:23.285 "uuid": "4a35b0c9-c804-4571-adb9-67ef41b7ba2b", 00:16:23.285 "strip_size_kb": 64, 00:16:23.285 "state": "configuring", 00:16:23.285 "raid_level": "raid5f", 00:16:23.285 "superblock": true, 00:16:23.285 "num_base_bdevs": 4, 00:16:23.285 "num_base_bdevs_discovered": 1, 00:16:23.285 "num_base_bdevs_operational": 4, 00:16:23.285 "base_bdevs_list": [ 00:16:23.285 { 00:16:23.285 "name": "BaseBdev1", 00:16:23.285 "uuid": "23f1a178-47ee-4df4-b653-f49e78899b28", 00:16:23.285 "is_configured": true, 00:16:23.285 "data_offset": 2048, 00:16:23.285 "data_size": 63488 00:16:23.285 }, 00:16:23.285 { 00:16:23.285 "name": "BaseBdev2", 00:16:23.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.285 "is_configured": false, 00:16:23.285 "data_offset": 0, 00:16:23.285 "data_size": 0 00:16:23.285 }, 00:16:23.285 { 00:16:23.285 "name": "BaseBdev3", 00:16:23.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.285 "is_configured": false, 00:16:23.285 "data_offset": 0, 00:16:23.285 "data_size": 0 00:16:23.285 }, 00:16:23.285 { 00:16:23.285 "name": "BaseBdev4", 00:16:23.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.285 "is_configured": false, 00:16:23.285 "data_offset": 0, 00:16:23.285 "data_size": 0 00:16:23.285 } 00:16:23.285 ] 00:16:23.285 }' 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.285 15:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.546 [2024-11-25 15:43:22.189919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:23.546 BaseBdev2 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.546 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.546 [ 00:16:23.546 { 00:16:23.546 "name": "BaseBdev2", 00:16:23.546 "aliases": [ 00:16:23.546 
"6a99054e-54dd-4fc6-b299-819a69177a06" 00:16:23.546 ], 00:16:23.546 "product_name": "Malloc disk", 00:16:23.546 "block_size": 512, 00:16:23.546 "num_blocks": 65536, 00:16:23.546 "uuid": "6a99054e-54dd-4fc6-b299-819a69177a06", 00:16:23.546 "assigned_rate_limits": { 00:16:23.546 "rw_ios_per_sec": 0, 00:16:23.546 "rw_mbytes_per_sec": 0, 00:16:23.546 "r_mbytes_per_sec": 0, 00:16:23.546 "w_mbytes_per_sec": 0 00:16:23.546 }, 00:16:23.546 "claimed": true, 00:16:23.546 "claim_type": "exclusive_write", 00:16:23.546 "zoned": false, 00:16:23.546 "supported_io_types": { 00:16:23.546 "read": true, 00:16:23.546 "write": true, 00:16:23.546 "unmap": true, 00:16:23.546 "flush": true, 00:16:23.546 "reset": true, 00:16:23.546 "nvme_admin": false, 00:16:23.546 "nvme_io": false, 00:16:23.546 "nvme_io_md": false, 00:16:23.546 "write_zeroes": true, 00:16:23.546 "zcopy": true, 00:16:23.546 "get_zone_info": false, 00:16:23.546 "zone_management": false, 00:16:23.546 "zone_append": false, 00:16:23.546 "compare": false, 00:16:23.547 "compare_and_write": false, 00:16:23.547 "abort": true, 00:16:23.547 "seek_hole": false, 00:16:23.547 "seek_data": false, 00:16:23.547 "copy": true, 00:16:23.547 "nvme_iov_md": false 00:16:23.547 }, 00:16:23.547 "memory_domains": [ 00:16:23.547 { 00:16:23.547 "dma_device_id": "system", 00:16:23.547 "dma_device_type": 1 00:16:23.547 }, 00:16:23.547 { 00:16:23.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.547 "dma_device_type": 2 00:16:23.547 } 00:16:23.547 ], 00:16:23.547 "driver_specific": {} 00:16:23.547 } 00:16:23.547 ] 00:16:23.547 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.547 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:23.547 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:23.547 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:23.806 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.807 "name": "Existed_Raid", 00:16:23.807 "uuid": 
"4a35b0c9-c804-4571-adb9-67ef41b7ba2b", 00:16:23.807 "strip_size_kb": 64, 00:16:23.807 "state": "configuring", 00:16:23.807 "raid_level": "raid5f", 00:16:23.807 "superblock": true, 00:16:23.807 "num_base_bdevs": 4, 00:16:23.807 "num_base_bdevs_discovered": 2, 00:16:23.807 "num_base_bdevs_operational": 4, 00:16:23.807 "base_bdevs_list": [ 00:16:23.807 { 00:16:23.807 "name": "BaseBdev1", 00:16:23.807 "uuid": "23f1a178-47ee-4df4-b653-f49e78899b28", 00:16:23.807 "is_configured": true, 00:16:23.807 "data_offset": 2048, 00:16:23.807 "data_size": 63488 00:16:23.807 }, 00:16:23.807 { 00:16:23.807 "name": "BaseBdev2", 00:16:23.807 "uuid": "6a99054e-54dd-4fc6-b299-819a69177a06", 00:16:23.807 "is_configured": true, 00:16:23.807 "data_offset": 2048, 00:16:23.807 "data_size": 63488 00:16:23.807 }, 00:16:23.807 { 00:16:23.807 "name": "BaseBdev3", 00:16:23.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.807 "is_configured": false, 00:16:23.807 "data_offset": 0, 00:16:23.807 "data_size": 0 00:16:23.807 }, 00:16:23.807 { 00:16:23.807 "name": "BaseBdev4", 00:16:23.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.807 "is_configured": false, 00:16:23.807 "data_offset": 0, 00:16:23.807 "data_size": 0 00:16:23.807 } 00:16:23.807 ] 00:16:23.807 }' 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.807 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.066 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:24.066 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.066 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.066 [2024-11-25 15:43:22.726720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:24.066 BaseBdev3 
00:16:24.066 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.066 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:24.066 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:24.066 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:24.066 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:24.066 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:24.066 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:24.066 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:24.067 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.067 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.067 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.067 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:24.067 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.067 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.337 [ 00:16:24.337 { 00:16:24.337 "name": "BaseBdev3", 00:16:24.337 "aliases": [ 00:16:24.337 "0ab90154-b74c-44cf-866f-0d6a92d24169" 00:16:24.337 ], 00:16:24.337 "product_name": "Malloc disk", 00:16:24.337 "block_size": 512, 00:16:24.337 "num_blocks": 65536, 00:16:24.337 "uuid": "0ab90154-b74c-44cf-866f-0d6a92d24169", 00:16:24.337 
"assigned_rate_limits": { 00:16:24.337 "rw_ios_per_sec": 0, 00:16:24.337 "rw_mbytes_per_sec": 0, 00:16:24.337 "r_mbytes_per_sec": 0, 00:16:24.337 "w_mbytes_per_sec": 0 00:16:24.337 }, 00:16:24.337 "claimed": true, 00:16:24.337 "claim_type": "exclusive_write", 00:16:24.337 "zoned": false, 00:16:24.337 "supported_io_types": { 00:16:24.337 "read": true, 00:16:24.337 "write": true, 00:16:24.337 "unmap": true, 00:16:24.337 "flush": true, 00:16:24.337 "reset": true, 00:16:24.337 "nvme_admin": false, 00:16:24.337 "nvme_io": false, 00:16:24.337 "nvme_io_md": false, 00:16:24.337 "write_zeroes": true, 00:16:24.337 "zcopy": true, 00:16:24.337 "get_zone_info": false, 00:16:24.337 "zone_management": false, 00:16:24.337 "zone_append": false, 00:16:24.337 "compare": false, 00:16:24.337 "compare_and_write": false, 00:16:24.337 "abort": true, 00:16:24.337 "seek_hole": false, 00:16:24.337 "seek_data": false, 00:16:24.337 "copy": true, 00:16:24.337 "nvme_iov_md": false 00:16:24.337 }, 00:16:24.337 "memory_domains": [ 00:16:24.337 { 00:16:24.337 "dma_device_id": "system", 00:16:24.337 "dma_device_type": 1 00:16:24.337 }, 00:16:24.337 { 00:16:24.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.337 "dma_device_type": 2 00:16:24.337 } 00:16:24.337 ], 00:16:24.337 "driver_specific": {} 00:16:24.337 } 00:16:24.337 ] 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.337 "name": "Existed_Raid", 00:16:24.337 "uuid": "4a35b0c9-c804-4571-adb9-67ef41b7ba2b", 00:16:24.337 "strip_size_kb": 64, 00:16:24.337 "state": "configuring", 00:16:24.337 "raid_level": "raid5f", 00:16:24.337 "superblock": true, 00:16:24.337 "num_base_bdevs": 4, 00:16:24.337 "num_base_bdevs_discovered": 3, 
00:16:24.337 "num_base_bdevs_operational": 4, 00:16:24.337 "base_bdevs_list": [ 00:16:24.337 { 00:16:24.337 "name": "BaseBdev1", 00:16:24.337 "uuid": "23f1a178-47ee-4df4-b653-f49e78899b28", 00:16:24.337 "is_configured": true, 00:16:24.337 "data_offset": 2048, 00:16:24.337 "data_size": 63488 00:16:24.337 }, 00:16:24.337 { 00:16:24.337 "name": "BaseBdev2", 00:16:24.337 "uuid": "6a99054e-54dd-4fc6-b299-819a69177a06", 00:16:24.337 "is_configured": true, 00:16:24.337 "data_offset": 2048, 00:16:24.337 "data_size": 63488 00:16:24.337 }, 00:16:24.337 { 00:16:24.337 "name": "BaseBdev3", 00:16:24.337 "uuid": "0ab90154-b74c-44cf-866f-0d6a92d24169", 00:16:24.337 "is_configured": true, 00:16:24.337 "data_offset": 2048, 00:16:24.337 "data_size": 63488 00:16:24.337 }, 00:16:24.337 { 00:16:24.337 "name": "BaseBdev4", 00:16:24.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.337 "is_configured": false, 00:16:24.337 "data_offset": 0, 00:16:24.337 "data_size": 0 00:16:24.337 } 00:16:24.337 ] 00:16:24.337 }' 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.337 15:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.624 [2024-11-25 15:43:23.255653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:24.624 [2024-11-25 15:43:23.255954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:24.624 [2024-11-25 15:43:23.255969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:24.624 [2024-11-25 
15:43:23.256273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:24.624 BaseBdev4 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.624 [2024-11-25 15:43:23.263889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:24.624 [2024-11-25 15:43:23.263916] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:24.624 [2024-11-25 15:43:23.264166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:24.624 15:43:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.624 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.624 [ 00:16:24.624 { 00:16:24.624 "name": "BaseBdev4", 00:16:24.624 "aliases": [ 00:16:24.624 "be9db921-3b64-451c-9e02-ed8d82165e66" 00:16:24.624 ], 00:16:24.624 "product_name": "Malloc disk", 00:16:24.624 "block_size": 512, 00:16:24.624 "num_blocks": 65536, 00:16:24.624 "uuid": "be9db921-3b64-451c-9e02-ed8d82165e66", 00:16:24.624 "assigned_rate_limits": { 00:16:24.624 "rw_ios_per_sec": 0, 00:16:24.624 "rw_mbytes_per_sec": 0, 00:16:24.624 "r_mbytes_per_sec": 0, 00:16:24.624 "w_mbytes_per_sec": 0 00:16:24.624 }, 00:16:24.624 "claimed": true, 00:16:24.624 "claim_type": "exclusive_write", 00:16:24.624 "zoned": false, 00:16:24.625 "supported_io_types": { 00:16:24.625 "read": true, 00:16:24.625 "write": true, 00:16:24.625 "unmap": true, 00:16:24.625 "flush": true, 00:16:24.625 "reset": true, 00:16:24.625 "nvme_admin": false, 00:16:24.625 "nvme_io": false, 00:16:24.625 "nvme_io_md": false, 00:16:24.625 "write_zeroes": true, 00:16:24.625 "zcopy": true, 00:16:24.625 "get_zone_info": false, 00:16:24.625 "zone_management": false, 00:16:24.625 "zone_append": false, 00:16:24.625 "compare": false, 00:16:24.625 "compare_and_write": false, 00:16:24.625 "abort": true, 00:16:24.625 "seek_hole": false, 00:16:24.625 "seek_data": false, 00:16:24.625 "copy": true, 00:16:24.625 "nvme_iov_md": false 00:16:24.625 }, 00:16:24.625 "memory_domains": [ 00:16:24.625 { 00:16:24.625 "dma_device_id": "system", 00:16:24.625 "dma_device_type": 1 00:16:24.625 }, 00:16:24.625 { 00:16:24.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.625 "dma_device_type": 2 00:16:24.625 } 00:16:24.625 ], 00:16:24.625 "driver_specific": {} 00:16:24.625 } 00:16:24.625 ] 00:16:24.625 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.625 15:43:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:24.625 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:24.625 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:24.625 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:24.625 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.625 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.625 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.625 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.625 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.625 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.625 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.625 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.625 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.885 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.885 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.885 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.885 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:24.885 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.885 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.885 "name": "Existed_Raid", 00:16:24.885 "uuid": "4a35b0c9-c804-4571-adb9-67ef41b7ba2b", 00:16:24.885 "strip_size_kb": 64, 00:16:24.885 "state": "online", 00:16:24.885 "raid_level": "raid5f", 00:16:24.885 "superblock": true, 00:16:24.885 "num_base_bdevs": 4, 00:16:24.885 "num_base_bdevs_discovered": 4, 00:16:24.885 "num_base_bdevs_operational": 4, 00:16:24.885 "base_bdevs_list": [ 00:16:24.885 { 00:16:24.885 "name": "BaseBdev1", 00:16:24.885 "uuid": "23f1a178-47ee-4df4-b653-f49e78899b28", 00:16:24.885 "is_configured": true, 00:16:24.885 "data_offset": 2048, 00:16:24.885 "data_size": 63488 00:16:24.885 }, 00:16:24.885 { 00:16:24.885 "name": "BaseBdev2", 00:16:24.885 "uuid": "6a99054e-54dd-4fc6-b299-819a69177a06", 00:16:24.885 "is_configured": true, 00:16:24.885 "data_offset": 2048, 00:16:24.885 "data_size": 63488 00:16:24.885 }, 00:16:24.885 { 00:16:24.885 "name": "BaseBdev3", 00:16:24.885 "uuid": "0ab90154-b74c-44cf-866f-0d6a92d24169", 00:16:24.885 "is_configured": true, 00:16:24.885 "data_offset": 2048, 00:16:24.885 "data_size": 63488 00:16:24.885 }, 00:16:24.885 { 00:16:24.885 "name": "BaseBdev4", 00:16:24.885 "uuid": "be9db921-3b64-451c-9e02-ed8d82165e66", 00:16:24.885 "is_configured": true, 00:16:24.885 "data_offset": 2048, 00:16:24.885 "data_size": 63488 00:16:24.885 } 00:16:24.885 ] 00:16:24.885 }' 00:16:24.885 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.885 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.144 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:25.144 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:25.144 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:25.144 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:25.145 [2024-11-25 15:43:23.671774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:25.145 "name": "Existed_Raid", 00:16:25.145 "aliases": [ 00:16:25.145 "4a35b0c9-c804-4571-adb9-67ef41b7ba2b" 00:16:25.145 ], 00:16:25.145 "product_name": "Raid Volume", 00:16:25.145 "block_size": 512, 00:16:25.145 "num_blocks": 190464, 00:16:25.145 "uuid": "4a35b0c9-c804-4571-adb9-67ef41b7ba2b", 00:16:25.145 "assigned_rate_limits": { 00:16:25.145 "rw_ios_per_sec": 0, 00:16:25.145 "rw_mbytes_per_sec": 0, 00:16:25.145 "r_mbytes_per_sec": 0, 00:16:25.145 "w_mbytes_per_sec": 0 00:16:25.145 }, 00:16:25.145 "claimed": false, 00:16:25.145 "zoned": false, 00:16:25.145 "supported_io_types": { 00:16:25.145 "read": true, 00:16:25.145 "write": true, 00:16:25.145 "unmap": false, 00:16:25.145 "flush": false, 
00:16:25.145 "reset": true, 00:16:25.145 "nvme_admin": false, 00:16:25.145 "nvme_io": false, 00:16:25.145 "nvme_io_md": false, 00:16:25.145 "write_zeroes": true, 00:16:25.145 "zcopy": false, 00:16:25.145 "get_zone_info": false, 00:16:25.145 "zone_management": false, 00:16:25.145 "zone_append": false, 00:16:25.145 "compare": false, 00:16:25.145 "compare_and_write": false, 00:16:25.145 "abort": false, 00:16:25.145 "seek_hole": false, 00:16:25.145 "seek_data": false, 00:16:25.145 "copy": false, 00:16:25.145 "nvme_iov_md": false 00:16:25.145 }, 00:16:25.145 "driver_specific": { 00:16:25.145 "raid": { 00:16:25.145 "uuid": "4a35b0c9-c804-4571-adb9-67ef41b7ba2b", 00:16:25.145 "strip_size_kb": 64, 00:16:25.145 "state": "online", 00:16:25.145 "raid_level": "raid5f", 00:16:25.145 "superblock": true, 00:16:25.145 "num_base_bdevs": 4, 00:16:25.145 "num_base_bdevs_discovered": 4, 00:16:25.145 "num_base_bdevs_operational": 4, 00:16:25.145 "base_bdevs_list": [ 00:16:25.145 { 00:16:25.145 "name": "BaseBdev1", 00:16:25.145 "uuid": "23f1a178-47ee-4df4-b653-f49e78899b28", 00:16:25.145 "is_configured": true, 00:16:25.145 "data_offset": 2048, 00:16:25.145 "data_size": 63488 00:16:25.145 }, 00:16:25.145 { 00:16:25.145 "name": "BaseBdev2", 00:16:25.145 "uuid": "6a99054e-54dd-4fc6-b299-819a69177a06", 00:16:25.145 "is_configured": true, 00:16:25.145 "data_offset": 2048, 00:16:25.145 "data_size": 63488 00:16:25.145 }, 00:16:25.145 { 00:16:25.145 "name": "BaseBdev3", 00:16:25.145 "uuid": "0ab90154-b74c-44cf-866f-0d6a92d24169", 00:16:25.145 "is_configured": true, 00:16:25.145 "data_offset": 2048, 00:16:25.145 "data_size": 63488 00:16:25.145 }, 00:16:25.145 { 00:16:25.145 "name": "BaseBdev4", 00:16:25.145 "uuid": "be9db921-3b64-451c-9e02-ed8d82165e66", 00:16:25.145 "is_configured": true, 00:16:25.145 "data_offset": 2048, 00:16:25.145 "data_size": 63488 00:16:25.145 } 00:16:25.145 ] 00:16:25.145 } 00:16:25.145 } 00:16:25.145 }' 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:25.145 BaseBdev2 00:16:25.145 BaseBdev3 00:16:25.145 BaseBdev4' 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.145 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.404 15:43:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.404 15:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.404 [2024-11-25 15:43:23.995124] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.663 "name": "Existed_Raid", 00:16:25.663 "uuid": "4a35b0c9-c804-4571-adb9-67ef41b7ba2b", 00:16:25.663 "strip_size_kb": 64, 00:16:25.663 "state": "online", 00:16:25.663 "raid_level": "raid5f", 00:16:25.663 "superblock": true, 00:16:25.663 "num_base_bdevs": 4, 00:16:25.663 "num_base_bdevs_discovered": 3, 00:16:25.663 "num_base_bdevs_operational": 3, 00:16:25.663 "base_bdevs_list": [ 00:16:25.663 { 00:16:25.663 "name": 
null, 00:16:25.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.663 "is_configured": false, 00:16:25.663 "data_offset": 0, 00:16:25.663 "data_size": 63488 00:16:25.663 }, 00:16:25.663 { 00:16:25.663 "name": "BaseBdev2", 00:16:25.663 "uuid": "6a99054e-54dd-4fc6-b299-819a69177a06", 00:16:25.663 "is_configured": true, 00:16:25.663 "data_offset": 2048, 00:16:25.663 "data_size": 63488 00:16:25.663 }, 00:16:25.663 { 00:16:25.663 "name": "BaseBdev3", 00:16:25.663 "uuid": "0ab90154-b74c-44cf-866f-0d6a92d24169", 00:16:25.663 "is_configured": true, 00:16:25.663 "data_offset": 2048, 00:16:25.663 "data_size": 63488 00:16:25.663 }, 00:16:25.663 { 00:16:25.663 "name": "BaseBdev4", 00:16:25.663 "uuid": "be9db921-3b64-451c-9e02-ed8d82165e66", 00:16:25.663 "is_configured": true, 00:16:25.663 "data_offset": 2048, 00:16:25.663 "data_size": 63488 00:16:25.663 } 00:16:25.663 ] 00:16:25.663 }' 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.663 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.923 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:25.923 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:25.923 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.923 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.923 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.923 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:25.923 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.182 [2024-11-25 15:43:24.623886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:26.182 [2024-11-25 15:43:24.624130] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.182 [2024-11-25 15:43:24.713082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.182 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.182 [2024-11-25 15:43:24.772945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:26.442 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.442 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:26.442 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:26.442 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.442 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:26.442 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.442 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.442 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.442 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:26.442 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:26.442 15:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:26.442 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.442 15:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.442 [2024-11-25 
15:43:24.917152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:26.442 [2024-11-25 15:43:24.917198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.442 15:43:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.442 BaseBdev2 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.442 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.702 [ 00:16:26.702 { 00:16:26.702 "name": "BaseBdev2", 00:16:26.702 "aliases": [ 00:16:26.702 "b436c33d-4554-40ea-956a-2d0dc1990351" 00:16:26.702 ], 00:16:26.702 "product_name": "Malloc disk", 00:16:26.702 "block_size": 512, 00:16:26.702 
"num_blocks": 65536, 00:16:26.702 "uuid": "b436c33d-4554-40ea-956a-2d0dc1990351", 00:16:26.702 "assigned_rate_limits": { 00:16:26.702 "rw_ios_per_sec": 0, 00:16:26.702 "rw_mbytes_per_sec": 0, 00:16:26.702 "r_mbytes_per_sec": 0, 00:16:26.702 "w_mbytes_per_sec": 0 00:16:26.702 }, 00:16:26.702 "claimed": false, 00:16:26.702 "zoned": false, 00:16:26.702 "supported_io_types": { 00:16:26.702 "read": true, 00:16:26.702 "write": true, 00:16:26.702 "unmap": true, 00:16:26.702 "flush": true, 00:16:26.702 "reset": true, 00:16:26.702 "nvme_admin": false, 00:16:26.702 "nvme_io": false, 00:16:26.702 "nvme_io_md": false, 00:16:26.702 "write_zeroes": true, 00:16:26.702 "zcopy": true, 00:16:26.702 "get_zone_info": false, 00:16:26.702 "zone_management": false, 00:16:26.702 "zone_append": false, 00:16:26.702 "compare": false, 00:16:26.702 "compare_and_write": false, 00:16:26.702 "abort": true, 00:16:26.702 "seek_hole": false, 00:16:26.702 "seek_data": false, 00:16:26.702 "copy": true, 00:16:26.702 "nvme_iov_md": false 00:16:26.702 }, 00:16:26.702 "memory_domains": [ 00:16:26.702 { 00:16:26.702 "dma_device_id": "system", 00:16:26.702 "dma_device_type": 1 00:16:26.702 }, 00:16:26.702 { 00:16:26.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.702 "dma_device_type": 2 00:16:26.702 } 00:16:26.702 ], 00:16:26.702 "driver_specific": {} 00:16:26.702 } 00:16:26.702 ] 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:26.702 15:43:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.702 BaseBdev3 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.702 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.702 [ 00:16:26.702 { 00:16:26.703 "name": "BaseBdev3", 00:16:26.703 "aliases": [ 00:16:26.703 
"2e982ddd-2991-497d-a55e-d38791fac74b" 00:16:26.703 ], 00:16:26.703 "product_name": "Malloc disk", 00:16:26.703 "block_size": 512, 00:16:26.703 "num_blocks": 65536, 00:16:26.703 "uuid": "2e982ddd-2991-497d-a55e-d38791fac74b", 00:16:26.703 "assigned_rate_limits": { 00:16:26.703 "rw_ios_per_sec": 0, 00:16:26.703 "rw_mbytes_per_sec": 0, 00:16:26.703 "r_mbytes_per_sec": 0, 00:16:26.703 "w_mbytes_per_sec": 0 00:16:26.703 }, 00:16:26.703 "claimed": false, 00:16:26.703 "zoned": false, 00:16:26.703 "supported_io_types": { 00:16:26.703 "read": true, 00:16:26.703 "write": true, 00:16:26.703 "unmap": true, 00:16:26.703 "flush": true, 00:16:26.703 "reset": true, 00:16:26.703 "nvme_admin": false, 00:16:26.703 "nvme_io": false, 00:16:26.703 "nvme_io_md": false, 00:16:26.703 "write_zeroes": true, 00:16:26.703 "zcopy": true, 00:16:26.703 "get_zone_info": false, 00:16:26.703 "zone_management": false, 00:16:26.703 "zone_append": false, 00:16:26.703 "compare": false, 00:16:26.703 "compare_and_write": false, 00:16:26.703 "abort": true, 00:16:26.703 "seek_hole": false, 00:16:26.703 "seek_data": false, 00:16:26.703 "copy": true, 00:16:26.703 "nvme_iov_md": false 00:16:26.703 }, 00:16:26.703 "memory_domains": [ 00:16:26.703 { 00:16:26.703 "dma_device_id": "system", 00:16:26.703 "dma_device_type": 1 00:16:26.703 }, 00:16:26.703 { 00:16:26.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.703 "dma_device_type": 2 00:16:26.703 } 00:16:26.703 ], 00:16:26.703 "driver_specific": {} 00:16:26.703 } 00:16:26.703 ] 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:26.703 15:43:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.703 BaseBdev4 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:26.703 [ 00:16:26.703 { 00:16:26.703 "name": "BaseBdev4", 00:16:26.703 "aliases": [ 00:16:26.703 "bfd029a8-e122-4545-9b5f-40114d5e27f5" 00:16:26.703 ], 00:16:26.703 "product_name": "Malloc disk", 00:16:26.703 "block_size": 512, 00:16:26.703 "num_blocks": 65536, 00:16:26.703 "uuid": "bfd029a8-e122-4545-9b5f-40114d5e27f5", 00:16:26.703 "assigned_rate_limits": { 00:16:26.703 "rw_ios_per_sec": 0, 00:16:26.703 "rw_mbytes_per_sec": 0, 00:16:26.703 "r_mbytes_per_sec": 0, 00:16:26.703 "w_mbytes_per_sec": 0 00:16:26.703 }, 00:16:26.703 "claimed": false, 00:16:26.703 "zoned": false, 00:16:26.703 "supported_io_types": { 00:16:26.703 "read": true, 00:16:26.703 "write": true, 00:16:26.703 "unmap": true, 00:16:26.703 "flush": true, 00:16:26.703 "reset": true, 00:16:26.703 "nvme_admin": false, 00:16:26.703 "nvme_io": false, 00:16:26.703 "nvme_io_md": false, 00:16:26.703 "write_zeroes": true, 00:16:26.703 "zcopy": true, 00:16:26.703 "get_zone_info": false, 00:16:26.703 "zone_management": false, 00:16:26.703 "zone_append": false, 00:16:26.703 "compare": false, 00:16:26.703 "compare_and_write": false, 00:16:26.703 "abort": true, 00:16:26.703 "seek_hole": false, 00:16:26.703 "seek_data": false, 00:16:26.703 "copy": true, 00:16:26.703 "nvme_iov_md": false 00:16:26.703 }, 00:16:26.703 "memory_domains": [ 00:16:26.703 { 00:16:26.703 "dma_device_id": "system", 00:16:26.703 "dma_device_type": 1 00:16:26.703 }, 00:16:26.703 { 00:16:26.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.703 "dma_device_type": 2 00:16:26.703 } 00:16:26.703 ], 00:16:26.703 "driver_specific": {} 00:16:26.703 } 00:16:26.703 ] 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:26.703 15:43:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.703 [2024-11-25 15:43:25.305155] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:26.703 [2024-11-25 15:43:25.305251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:26.703 [2024-11-25 15:43:25.305294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.703 [2024-11-25 15:43:25.307036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:26.703 [2024-11-25 15:43:25.307122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.703 "name": "Existed_Raid", 00:16:26.703 "uuid": "cfe8e287-3c15-4f92-9493-705e5f28f6e2", 00:16:26.703 "strip_size_kb": 64, 00:16:26.703 "state": "configuring", 00:16:26.703 "raid_level": "raid5f", 00:16:26.703 "superblock": true, 00:16:26.703 "num_base_bdevs": 4, 00:16:26.703 "num_base_bdevs_discovered": 3, 00:16:26.703 "num_base_bdevs_operational": 4, 00:16:26.703 "base_bdevs_list": [ 00:16:26.703 { 00:16:26.703 "name": "BaseBdev1", 00:16:26.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.703 "is_configured": false, 00:16:26.703 "data_offset": 0, 00:16:26.703 "data_size": 0 00:16:26.703 }, 00:16:26.703 { 00:16:26.703 "name": "BaseBdev2", 00:16:26.703 "uuid": "b436c33d-4554-40ea-956a-2d0dc1990351", 00:16:26.703 "is_configured": true, 00:16:26.703 "data_offset": 2048, 00:16:26.703 
"data_size": 63488 00:16:26.703 }, 00:16:26.703 { 00:16:26.703 "name": "BaseBdev3", 00:16:26.703 "uuid": "2e982ddd-2991-497d-a55e-d38791fac74b", 00:16:26.703 "is_configured": true, 00:16:26.703 "data_offset": 2048, 00:16:26.703 "data_size": 63488 00:16:26.703 }, 00:16:26.703 { 00:16:26.703 "name": "BaseBdev4", 00:16:26.703 "uuid": "bfd029a8-e122-4545-9b5f-40114d5e27f5", 00:16:26.703 "is_configured": true, 00:16:26.703 "data_offset": 2048, 00:16:26.703 "data_size": 63488 00:16:26.703 } 00:16:26.703 ] 00:16:26.703 }' 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.703 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.273 [2024-11-25 15:43:25.768407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.273 15:43:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.273 "name": "Existed_Raid", 00:16:27.273 "uuid": "cfe8e287-3c15-4f92-9493-705e5f28f6e2", 00:16:27.273 "strip_size_kb": 64, 00:16:27.273 "state": "configuring", 00:16:27.273 "raid_level": "raid5f", 00:16:27.273 "superblock": true, 00:16:27.273 "num_base_bdevs": 4, 00:16:27.273 "num_base_bdevs_discovered": 2, 00:16:27.273 "num_base_bdevs_operational": 4, 00:16:27.273 "base_bdevs_list": [ 00:16:27.273 { 00:16:27.273 "name": "BaseBdev1", 00:16:27.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.273 "is_configured": false, 00:16:27.273 "data_offset": 0, 00:16:27.273 "data_size": 0 00:16:27.273 }, 00:16:27.273 { 00:16:27.273 "name": null, 00:16:27.273 "uuid": "b436c33d-4554-40ea-956a-2d0dc1990351", 00:16:27.273 
"is_configured": false, 00:16:27.273 "data_offset": 0, 00:16:27.273 "data_size": 63488 00:16:27.273 }, 00:16:27.273 { 00:16:27.273 "name": "BaseBdev3", 00:16:27.273 "uuid": "2e982ddd-2991-497d-a55e-d38791fac74b", 00:16:27.273 "is_configured": true, 00:16:27.273 "data_offset": 2048, 00:16:27.273 "data_size": 63488 00:16:27.273 }, 00:16:27.273 { 00:16:27.273 "name": "BaseBdev4", 00:16:27.273 "uuid": "bfd029a8-e122-4545-9b5f-40114d5e27f5", 00:16:27.273 "is_configured": true, 00:16:27.273 "data_offset": 2048, 00:16:27.273 "data_size": 63488 00:16:27.273 } 00:16:27.273 ] 00:16:27.273 }' 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.273 15:43:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.842 [2024-11-25 15:43:26.323704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:27.842 BaseBdev1 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.842 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.842 [ 00:16:27.842 { 00:16:27.842 "name": "BaseBdev1", 00:16:27.842 "aliases": [ 00:16:27.842 "361b18cf-b8c7-4234-b283-cff3b3adf3ce" 00:16:27.842 ], 00:16:27.842 "product_name": "Malloc disk", 00:16:27.842 "block_size": 512, 00:16:27.842 "num_blocks": 65536, 00:16:27.842 "uuid": "361b18cf-b8c7-4234-b283-cff3b3adf3ce", 
00:16:27.842 "assigned_rate_limits": { 00:16:27.842 "rw_ios_per_sec": 0, 00:16:27.842 "rw_mbytes_per_sec": 0, 00:16:27.842 "r_mbytes_per_sec": 0, 00:16:27.842 "w_mbytes_per_sec": 0 00:16:27.843 }, 00:16:27.843 "claimed": true, 00:16:27.843 "claim_type": "exclusive_write", 00:16:27.843 "zoned": false, 00:16:27.843 "supported_io_types": { 00:16:27.843 "read": true, 00:16:27.843 "write": true, 00:16:27.843 "unmap": true, 00:16:27.843 "flush": true, 00:16:27.843 "reset": true, 00:16:27.843 "nvme_admin": false, 00:16:27.843 "nvme_io": false, 00:16:27.843 "nvme_io_md": false, 00:16:27.843 "write_zeroes": true, 00:16:27.843 "zcopy": true, 00:16:27.843 "get_zone_info": false, 00:16:27.843 "zone_management": false, 00:16:27.843 "zone_append": false, 00:16:27.843 "compare": false, 00:16:27.843 "compare_and_write": false, 00:16:27.843 "abort": true, 00:16:27.843 "seek_hole": false, 00:16:27.843 "seek_data": false, 00:16:27.843 "copy": true, 00:16:27.843 "nvme_iov_md": false 00:16:27.843 }, 00:16:27.843 "memory_domains": [ 00:16:27.843 { 00:16:27.843 "dma_device_id": "system", 00:16:27.843 "dma_device_type": 1 00:16:27.843 }, 00:16:27.843 { 00:16:27.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.843 "dma_device_type": 2 00:16:27.843 } 00:16:27.843 ], 00:16:27.843 "driver_specific": {} 00:16:27.843 } 00:16:27.843 ] 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.843 15:43:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.843 "name": "Existed_Raid", 00:16:27.843 "uuid": "cfe8e287-3c15-4f92-9493-705e5f28f6e2", 00:16:27.843 "strip_size_kb": 64, 00:16:27.843 "state": "configuring", 00:16:27.843 "raid_level": "raid5f", 00:16:27.843 "superblock": true, 00:16:27.843 "num_base_bdevs": 4, 00:16:27.843 "num_base_bdevs_discovered": 3, 00:16:27.843 "num_base_bdevs_operational": 4, 00:16:27.843 "base_bdevs_list": [ 00:16:27.843 { 00:16:27.843 "name": "BaseBdev1", 00:16:27.843 "uuid": "361b18cf-b8c7-4234-b283-cff3b3adf3ce", 
00:16:27.843 "is_configured": true, 00:16:27.843 "data_offset": 2048, 00:16:27.843 "data_size": 63488 00:16:27.843 }, 00:16:27.843 { 00:16:27.843 "name": null, 00:16:27.843 "uuid": "b436c33d-4554-40ea-956a-2d0dc1990351", 00:16:27.843 "is_configured": false, 00:16:27.843 "data_offset": 0, 00:16:27.843 "data_size": 63488 00:16:27.843 }, 00:16:27.843 { 00:16:27.843 "name": "BaseBdev3", 00:16:27.843 "uuid": "2e982ddd-2991-497d-a55e-d38791fac74b", 00:16:27.843 "is_configured": true, 00:16:27.843 "data_offset": 2048, 00:16:27.843 "data_size": 63488 00:16:27.843 }, 00:16:27.843 { 00:16:27.843 "name": "BaseBdev4", 00:16:27.843 "uuid": "bfd029a8-e122-4545-9b5f-40114d5e27f5", 00:16:27.843 "is_configured": true, 00:16:27.843 "data_offset": 2048, 00:16:27.843 "data_size": 63488 00:16:27.843 } 00:16:27.843 ] 00:16:27.843 }' 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.843 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.102 [2024-11-25 15:43:26.771039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.102 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.362 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.362 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.362 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.362 15:43:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.362 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.362 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.362 "name": "Existed_Raid", 00:16:28.362 "uuid": "cfe8e287-3c15-4f92-9493-705e5f28f6e2", 00:16:28.362 "strip_size_kb": 64, 00:16:28.362 "state": "configuring", 00:16:28.362 "raid_level": "raid5f", 00:16:28.362 "superblock": true, 00:16:28.362 "num_base_bdevs": 4, 00:16:28.362 "num_base_bdevs_discovered": 2, 00:16:28.362 "num_base_bdevs_operational": 4, 00:16:28.362 "base_bdevs_list": [ 00:16:28.362 { 00:16:28.362 "name": "BaseBdev1", 00:16:28.362 "uuid": "361b18cf-b8c7-4234-b283-cff3b3adf3ce", 00:16:28.362 "is_configured": true, 00:16:28.362 "data_offset": 2048, 00:16:28.362 "data_size": 63488 00:16:28.362 }, 00:16:28.362 { 00:16:28.362 "name": null, 00:16:28.362 "uuid": "b436c33d-4554-40ea-956a-2d0dc1990351", 00:16:28.362 "is_configured": false, 00:16:28.362 "data_offset": 0, 00:16:28.362 "data_size": 63488 00:16:28.362 }, 00:16:28.362 { 00:16:28.362 "name": null, 00:16:28.362 "uuid": "2e982ddd-2991-497d-a55e-d38791fac74b", 00:16:28.362 "is_configured": false, 00:16:28.362 "data_offset": 0, 00:16:28.362 "data_size": 63488 00:16:28.362 }, 00:16:28.362 { 00:16:28.362 "name": "BaseBdev4", 00:16:28.362 "uuid": "bfd029a8-e122-4545-9b5f-40114d5e27f5", 00:16:28.362 "is_configured": true, 00:16:28.362 "data_offset": 2048, 00:16:28.362 "data_size": 63488 00:16:28.362 } 00:16:28.362 ] 00:16:28.362 }' 00:16:28.362 15:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.362 15:43:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.621 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:28.621 15:43:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.621 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.621 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.621 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.880 [2024-11-25 15:43:27.310113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.880 "name": "Existed_Raid", 00:16:28.880 "uuid": "cfe8e287-3c15-4f92-9493-705e5f28f6e2", 00:16:28.880 "strip_size_kb": 64, 00:16:28.880 "state": "configuring", 00:16:28.880 "raid_level": "raid5f", 00:16:28.880 "superblock": true, 00:16:28.880 "num_base_bdevs": 4, 00:16:28.880 "num_base_bdevs_discovered": 3, 00:16:28.880 "num_base_bdevs_operational": 4, 00:16:28.880 "base_bdevs_list": [ 00:16:28.880 { 00:16:28.880 "name": "BaseBdev1", 00:16:28.880 "uuid": "361b18cf-b8c7-4234-b283-cff3b3adf3ce", 00:16:28.880 "is_configured": true, 00:16:28.880 "data_offset": 2048, 00:16:28.880 "data_size": 63488 00:16:28.880 }, 00:16:28.880 { 00:16:28.880 "name": null, 00:16:28.880 "uuid": "b436c33d-4554-40ea-956a-2d0dc1990351", 00:16:28.880 "is_configured": false, 00:16:28.880 "data_offset": 0, 00:16:28.880 "data_size": 63488 00:16:28.880 }, 00:16:28.880 { 00:16:28.880 "name": "BaseBdev3", 00:16:28.880 "uuid": "2e982ddd-2991-497d-a55e-d38791fac74b", 
00:16:28.880 "is_configured": true, 00:16:28.880 "data_offset": 2048, 00:16:28.880 "data_size": 63488 00:16:28.880 }, 00:16:28.880 { 00:16:28.880 "name": "BaseBdev4", 00:16:28.880 "uuid": "bfd029a8-e122-4545-9b5f-40114d5e27f5", 00:16:28.880 "is_configured": true, 00:16:28.880 "data_offset": 2048, 00:16:28.880 "data_size": 63488 00:16:28.880 } 00:16:28.880 ] 00:16:28.880 }' 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.880 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.138 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.138 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.138 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.139 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:29.139 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.139 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:29.139 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:29.139 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.139 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.139 [2024-11-25 15:43:27.769330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.397 "name": "Existed_Raid", 00:16:29.397 "uuid": "cfe8e287-3c15-4f92-9493-705e5f28f6e2", 00:16:29.397 "strip_size_kb": 64, 00:16:29.397 "state": "configuring", 00:16:29.397 "raid_level": "raid5f", 
00:16:29.397 "superblock": true, 00:16:29.397 "num_base_bdevs": 4, 00:16:29.397 "num_base_bdevs_discovered": 2, 00:16:29.397 "num_base_bdevs_operational": 4, 00:16:29.397 "base_bdevs_list": [ 00:16:29.397 { 00:16:29.397 "name": null, 00:16:29.397 "uuid": "361b18cf-b8c7-4234-b283-cff3b3adf3ce", 00:16:29.397 "is_configured": false, 00:16:29.397 "data_offset": 0, 00:16:29.397 "data_size": 63488 00:16:29.397 }, 00:16:29.397 { 00:16:29.397 "name": null, 00:16:29.397 "uuid": "b436c33d-4554-40ea-956a-2d0dc1990351", 00:16:29.397 "is_configured": false, 00:16:29.397 "data_offset": 0, 00:16:29.397 "data_size": 63488 00:16:29.397 }, 00:16:29.397 { 00:16:29.397 "name": "BaseBdev3", 00:16:29.397 "uuid": "2e982ddd-2991-497d-a55e-d38791fac74b", 00:16:29.397 "is_configured": true, 00:16:29.397 "data_offset": 2048, 00:16:29.397 "data_size": 63488 00:16:29.397 }, 00:16:29.397 { 00:16:29.397 "name": "BaseBdev4", 00:16:29.397 "uuid": "bfd029a8-e122-4545-9b5f-40114d5e27f5", 00:16:29.397 "is_configured": true, 00:16:29.397 "data_offset": 2048, 00:16:29.397 "data_size": 63488 00:16:29.397 } 00:16:29.397 ] 00:16:29.397 }' 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.397 15:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.657 [2024-11-25 15:43:28.321076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.657 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:29.917 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.917 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:29.917 "name": "Existed_Raid",
00:16:29.917 "uuid": "cfe8e287-3c15-4f92-9493-705e5f28f6e2",
00:16:29.917 "strip_size_kb": 64,
00:16:29.917 "state": "configuring",
00:16:29.917 "raid_level": "raid5f",
00:16:29.917 "superblock": true,
00:16:29.917 "num_base_bdevs": 4,
00:16:29.917 "num_base_bdevs_discovered": 3,
00:16:29.917 "num_base_bdevs_operational": 4,
00:16:29.917 "base_bdevs_list": [
00:16:29.917 {
00:16:29.917 "name": null,
00:16:29.917 "uuid": "361b18cf-b8c7-4234-b283-cff3b3adf3ce",
00:16:29.917 "is_configured": false,
00:16:29.917 "data_offset": 0,
00:16:29.917 "data_size": 63488
00:16:29.917 },
00:16:29.917 {
00:16:29.917 "name": "BaseBdev2",
00:16:29.917 "uuid": "b436c33d-4554-40ea-956a-2d0dc1990351",
00:16:29.917 "is_configured": true,
00:16:29.917 "data_offset": 2048,
00:16:29.917 "data_size": 63488
00:16:29.917 },
00:16:29.917 {
00:16:29.917 "name": "BaseBdev3",
00:16:29.917 "uuid": "2e982ddd-2991-497d-a55e-d38791fac74b",
00:16:29.917 "is_configured": true,
00:16:29.917 "data_offset": 2048,
00:16:29.917 "data_size": 63488
00:16:29.917 },
00:16:29.917 {
00:16:29.917 "name": "BaseBdev4",
00:16:29.917 "uuid": "bfd029a8-e122-4545-9b5f-40114d5e27f5",
00:16:29.917 "is_configured": true,
00:16:29.917 "data_offset": 2048,
00:16:29.917 "data_size": 63488
00:16:29.917 }
00:16:29.917 ]
00:16:29.917 }'
00:16:29.917 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:29.917 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:30.178 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:30.178 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.178 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:30.178 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:16:30.178 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.178 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:16:30.178 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:16:30.178 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:30.178 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.178 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:30.178 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.438 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 361b18cf-b8c7-4234-b283-cff3b3adf3ce
00:16:30.438 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.438 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:30.438 [2024-11-25 15:43:28.898906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:16:30.438 [2024-11-25 15:43:28.899256] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:16:30.438 [2024-11-25 15:43:28.899313] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:30.438 [2024-11-25 15:43:28.899578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:16:30.438 NewBaseBdev
00:16:30.438 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.438 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:16:30.438 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:16:30.438 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:30.438 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:16:30.438 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:30.438 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:30.438 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:30.438 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.438 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:30.438 [2024-11-25 15:43:28.906780] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:16:30.439 [2024-11-25 15:43:28.906836] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:16:30.439 [2024-11-25 15:43:28.907005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:30.439 [
00:16:30.439 {
00:16:30.439 "name": "NewBaseBdev",
00:16:30.439 "aliases": [
00:16:30.439 "361b18cf-b8c7-4234-b283-cff3b3adf3ce"
00:16:30.439 ],
00:16:30.439 "product_name": "Malloc disk",
00:16:30.439 "block_size": 512,
00:16:30.439 "num_blocks": 65536,
00:16:30.439 "uuid": "361b18cf-b8c7-4234-b283-cff3b3adf3ce",
00:16:30.439 "assigned_rate_limits": {
00:16:30.439 "rw_ios_per_sec": 0,
00:16:30.439 "rw_mbytes_per_sec": 0,
00:16:30.439 "r_mbytes_per_sec": 0,
00:16:30.439 "w_mbytes_per_sec": 0
00:16:30.439 },
00:16:30.439 "claimed": true,
00:16:30.439 "claim_type": "exclusive_write",
00:16:30.439 "zoned": false,
00:16:30.439 "supported_io_types": {
00:16:30.439 "read": true,
00:16:30.439 "write": true,
00:16:30.439 "unmap": true,
00:16:30.439 "flush": true,
00:16:30.439 "reset": true,
00:16:30.439 "nvme_admin": false,
00:16:30.439 "nvme_io": false,
00:16:30.439 "nvme_io_md": false,
00:16:30.439 "write_zeroes": true,
00:16:30.439 "zcopy": true,
00:16:30.439 "get_zone_info": false,
00:16:30.439 "zone_management": false,
00:16:30.439 "zone_append": false,
00:16:30.439 "compare": false,
00:16:30.439 "compare_and_write": false,
00:16:30.439 "abort": true,
00:16:30.439 "seek_hole": false,
00:16:30.439 "seek_data": false,
00:16:30.439 "copy": true,
00:16:30.439 "nvme_iov_md": false
00:16:30.439 },
00:16:30.439 "memory_domains": [
00:16:30.439 {
00:16:30.439 "dma_device_id": "system",
00:16:30.439 "dma_device_type": 1
00:16:30.439 },
00:16:30.439 {
00:16:30.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:30.439 "dma_device_type": 2
00:16:30.439 }
00:16:30.439 ],
00:16:30.439 "driver_specific": {}
00:16:30.439 }
00:16:30.439 ]
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:30.439 "name": "Existed_Raid",
00:16:30.439 "uuid": "cfe8e287-3c15-4f92-9493-705e5f28f6e2",
00:16:30.439 "strip_size_kb": 64,
00:16:30.439 "state": "online",
00:16:30.439 "raid_level": "raid5f",
00:16:30.439 "superblock": true,
00:16:30.439 "num_base_bdevs": 4,
00:16:30.439 "num_base_bdevs_discovered": 4,
00:16:30.439 "num_base_bdevs_operational": 4,
00:16:30.439 "base_bdevs_list": [
00:16:30.439 {
00:16:30.439 "name": "NewBaseBdev",
00:16:30.439 "uuid": "361b18cf-b8c7-4234-b283-cff3b3adf3ce",
00:16:30.439 "is_configured": true,
00:16:30.439 "data_offset": 2048,
00:16:30.439 "data_size": 63488
00:16:30.439 },
00:16:30.439 {
00:16:30.439 "name": "BaseBdev2",
00:16:30.439 "uuid": "b436c33d-4554-40ea-956a-2d0dc1990351",
00:16:30.439 "is_configured": true,
00:16:30.439 "data_offset": 2048,
00:16:30.439 "data_size": 63488
00:16:30.439 },
00:16:30.439 {
00:16:30.439 "name": "BaseBdev3",
00:16:30.439 "uuid": "2e982ddd-2991-497d-a55e-d38791fac74b",
00:16:30.439 "is_configured": true,
00:16:30.439 "data_offset": 2048,
00:16:30.439 "data_size": 63488
00:16:30.439 },
00:16:30.439 {
00:16:30.439 "name": "BaseBdev4",
00:16:30.439 "uuid": "bfd029a8-e122-4545-9b5f-40114d5e27f5",
00:16:30.439 "is_configured": true,
00:16:30.439 "data_offset": 2048,
00:16:30.439 "data_size": 63488
00:16:30.439 }
00:16:30.439 ]
00:16:30.439 }'
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:30.439 15:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:30.701 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:16:30.701 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:16:30.701 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:30.701 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:30.701 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:16:30.701 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:30.964 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:16:30.964 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:30.964 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.964 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:30.964 [2024-11-25 15:43:29.390487] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:30.964 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.964 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:30.964 "name": "Existed_Raid",
00:16:30.964 "aliases": [
00:16:30.964 "cfe8e287-3c15-4f92-9493-705e5f28f6e2"
00:16:30.964 ],
00:16:30.964 "product_name": "Raid Volume",
00:16:30.964 "block_size": 512,
00:16:30.964 "num_blocks": 190464,
00:16:30.964 "uuid": "cfe8e287-3c15-4f92-9493-705e5f28f6e2",
00:16:30.964 "assigned_rate_limits": {
00:16:30.964 "rw_ios_per_sec": 0,
00:16:30.964 "rw_mbytes_per_sec": 0,
00:16:30.964 "r_mbytes_per_sec": 0,
00:16:30.964 "w_mbytes_per_sec": 0
00:16:30.964 },
00:16:30.964 "claimed": false,
00:16:30.964 "zoned": false,
00:16:30.964 "supported_io_types": {
00:16:30.964 "read": true,
00:16:30.964 "write": true,
00:16:30.964 "unmap": false,
00:16:30.964 "flush": false,
00:16:30.964 "reset": true,
00:16:30.964 "nvme_admin": false,
00:16:30.964 "nvme_io": false,
00:16:30.964 "nvme_io_md": false,
00:16:30.964 "write_zeroes": true,
00:16:30.964 "zcopy": false,
00:16:30.964 "get_zone_info": false,
00:16:30.964 "zone_management": false,
00:16:30.964 "zone_append": false,
00:16:30.964 "compare": false,
00:16:30.964 "compare_and_write": false,
00:16:30.964 "abort": false,
00:16:30.964 "seek_hole": false,
00:16:30.964 "seek_data": false,
00:16:30.964 "copy": false,
00:16:30.964 "nvme_iov_md": false
00:16:30.964 },
00:16:30.964 "driver_specific": {
00:16:30.964 "raid": {
00:16:30.964 "uuid": "cfe8e287-3c15-4f92-9493-705e5f28f6e2",
00:16:30.964 "strip_size_kb": 64,
00:16:30.964 "state": "online",
00:16:30.964 "raid_level": "raid5f",
00:16:30.964 "superblock": true,
00:16:30.964 "num_base_bdevs": 4,
00:16:30.964 "num_base_bdevs_discovered": 4,
00:16:30.964 "num_base_bdevs_operational": 4,
00:16:30.964 "base_bdevs_list": [
00:16:30.964 {
00:16:30.964 "name": "NewBaseBdev",
00:16:30.964 "uuid": "361b18cf-b8c7-4234-b283-cff3b3adf3ce",
00:16:30.964 "is_configured": true,
00:16:30.964 "data_offset": 2048,
00:16:30.964 "data_size": 63488
00:16:30.964 },
00:16:30.964 {
00:16:30.964 "name": "BaseBdev2",
00:16:30.964 "uuid": "b436c33d-4554-40ea-956a-2d0dc1990351",
00:16:30.964 "is_configured": true,
00:16:30.964 "data_offset": 2048,
00:16:30.964 "data_size": 63488
00:16:30.964 },
00:16:30.964 {
00:16:30.964 "name": "BaseBdev3",
00:16:30.964 "uuid": "2e982ddd-2991-497d-a55e-d38791fac74b",
00:16:30.964 "is_configured": true,
00:16:30.964 "data_offset": 2048,
00:16:30.964 "data_size": 63488
00:16:30.964 },
00:16:30.964 {
00:16:30.964 "name": "BaseBdev4",
00:16:30.964 "uuid": "bfd029a8-e122-4545-9b5f-40114d5e27f5",
00:16:30.964 "is_configured": true,
00:16:30.964 "data_offset": 2048,
00:16:30.964 "data_size": 63488
00:16:30.964 }
00:16:30.964 ]
00:16:30.964 }
00:16:30.964 }
00:16:30.964 }'
00:16:30.964 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:30.964 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:16:30.964 BaseBdev2
00:16:30.964 BaseBdev3
00:16:30.964 BaseBdev4'
00:16:30.964 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:30.964 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:16:30.964 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:30.964 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:30.964 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:16:30.964 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.964 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:30.965 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:31.224 [2024-11-25 15:43:29.705711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:31.224 [2024-11-25 15:43:29.705740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:31.224 [2024-11-25 15:43:29.705818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:31.224 [2024-11-25 15:43:29.706116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:31.224 [2024-11-25 15:43:29.706128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83050
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83050 ']'
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83050
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83050
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:31.224 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83050'
killing process with pid 83050
00:16:31.225 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83050
00:16:31.225 [2024-11-25 15:43:29.748610] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:31.225 15:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83050
00:16:31.484 [2024-11-25 15:43:30.121540] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:32.865 15:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:16:32.865 
00:16:32.865 real 0m11.242s
00:16:32.865 user 0m17.915s
00:16:32.865 sys 0m1.976s
00:16:32.865 15:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:32.865 15:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:32.865 ************************************
00:16:32.865 END TEST raid5f_state_function_test_sb
00:16:32.865 ************************************
00:16:32.865 15:43:31 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4
00:16:32.865 15:43:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:16:32.865 15:43:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:32.865 15:43:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:16:32.865 ************************************
00:16:32.865 START TEST raid5f_superblock_test
00:16:32.865 ************************************
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']'
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83715
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83715
00:16:32.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83715 ']'
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:32.865 15:43:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:32.865 [2024-11-25 15:43:31.354787] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization...
00:16:32.865 [2024-11-25 15:43:31.354995] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83715 ]
00:16:33.124 [2024-11-25 15:43:31.528291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:33.124 [2024-11-25 15:43:31.638361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:33.384 [2024-11-25 15:43:31.827014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:33.384 [2024-11-25 15:43:31.827101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:33.643 malloc1
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:33.643 [2024-11-25 15:43:32.240787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:33.643 [2024-11-25 15:43:32.240906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:33.643 [2024-11-25 15:43:32.240948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:16:33.643 [2024-11-25 15:43:32.240976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:33.643 [2024-11-25 15:43:32.243025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:33.643 [2024-11-25 15:43:32.243102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:33.643 pt1
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:33.643 malloc2
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:33.643 [2024-11-25 15:43:32.291155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:33.643 [2024-11-25 15:43:32.291242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:33.643 [2024-11-25 15:43:32.291294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:16:33.643 [2024-11-25 15:43:32.291330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:33.643 [2024-11-25 15:43:32.293322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:33.643 [2024-11-25 15:43:32.293397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:33.643 pt2
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:33.643 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:16:33.644 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.644 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:33.904 malloc3
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:33.904 [2024-11-25 15:43:32.383869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:33.904 [2024-11-25 15:43:32.383971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:33.904 [2024-11-25 15:43:32.384009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:16:33.904 [2024-11-25 15:43:32.384058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:33.904 [2024-11-25 15:43:32.386081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:33.904 [2024-11-25 15:43:32.386159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:33.904 pt3
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:33.904 malloc4
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:33.904 [2024-11-25 15:43:32.440861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:33.904 [2024-11-25 15:43:32.440957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:33.904 [2024-11-25 15:43:32.440991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:16:33.904 [2024-11-25 15:43:32.441032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:33.904 [2024-11-25 15:43:32.442995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:33.904 [2024-11-25 15:43:32.443081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:33.904 pt4
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test --
common/autotest_common.sh@10 -- # set +x 00:16:33.904 [2024-11-25 15:43:32.452871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:33.904 [2024-11-25 15:43:32.454608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.904 [2024-11-25 15:43:32.454669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:33.904 [2024-11-25 15:43:32.454728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:33.904 [2024-11-25 15:43:32.454917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:33.904 [2024-11-25 15:43:32.454932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:33.904 [2024-11-25 15:43:32.455175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:33.904 [2024-11-25 15:43:32.462277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:33.904 [2024-11-25 15:43:32.462298] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:33.904 [2024-11-25 15:43:32.462487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.904 
15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.904 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.904 "name": "raid_bdev1", 00:16:33.904 "uuid": "5ce43d5a-88f9-4713-9033-ad80241d1719", 00:16:33.904 "strip_size_kb": 64, 00:16:33.904 "state": "online", 00:16:33.904 "raid_level": "raid5f", 00:16:33.904 "superblock": true, 00:16:33.904 "num_base_bdevs": 4, 00:16:33.904 "num_base_bdevs_discovered": 4, 00:16:33.904 "num_base_bdevs_operational": 4, 00:16:33.904 "base_bdevs_list": [ 00:16:33.904 { 00:16:33.904 "name": "pt1", 00:16:33.904 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:33.904 "is_configured": true, 00:16:33.904 "data_offset": 2048, 00:16:33.904 "data_size": 63488 00:16:33.904 }, 00:16:33.904 { 00:16:33.904 "name": "pt2", 00:16:33.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.904 "is_configured": true, 00:16:33.904 "data_offset": 2048, 00:16:33.904 
"data_size": 63488 00:16:33.904 }, 00:16:33.904 { 00:16:33.904 "name": "pt3", 00:16:33.904 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:33.904 "is_configured": true, 00:16:33.904 "data_offset": 2048, 00:16:33.904 "data_size": 63488 00:16:33.904 }, 00:16:33.904 { 00:16:33.904 "name": "pt4", 00:16:33.904 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:33.905 "is_configured": true, 00:16:33.905 "data_offset": 2048, 00:16:33.905 "data_size": 63488 00:16:33.905 } 00:16:33.905 ] 00:16:33.905 }' 00:16:33.905 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.905 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.474 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:34.474 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:34.474 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:34.474 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:34.474 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:34.474 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:34.474 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.474 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.474 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:34.474 15:43:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.474 [2024-11-25 15:43:32.870152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.474 15:43:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.474 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:34.474 "name": "raid_bdev1", 00:16:34.474 "aliases": [ 00:16:34.474 "5ce43d5a-88f9-4713-9033-ad80241d1719" 00:16:34.474 ], 00:16:34.474 "product_name": "Raid Volume", 00:16:34.474 "block_size": 512, 00:16:34.474 "num_blocks": 190464, 00:16:34.474 "uuid": "5ce43d5a-88f9-4713-9033-ad80241d1719", 00:16:34.474 "assigned_rate_limits": { 00:16:34.474 "rw_ios_per_sec": 0, 00:16:34.474 "rw_mbytes_per_sec": 0, 00:16:34.474 "r_mbytes_per_sec": 0, 00:16:34.474 "w_mbytes_per_sec": 0 00:16:34.474 }, 00:16:34.474 "claimed": false, 00:16:34.474 "zoned": false, 00:16:34.474 "supported_io_types": { 00:16:34.474 "read": true, 00:16:34.474 "write": true, 00:16:34.474 "unmap": false, 00:16:34.474 "flush": false, 00:16:34.474 "reset": true, 00:16:34.474 "nvme_admin": false, 00:16:34.474 "nvme_io": false, 00:16:34.474 "nvme_io_md": false, 00:16:34.474 "write_zeroes": true, 00:16:34.474 "zcopy": false, 00:16:34.474 "get_zone_info": false, 00:16:34.474 "zone_management": false, 00:16:34.474 "zone_append": false, 00:16:34.474 "compare": false, 00:16:34.474 "compare_and_write": false, 00:16:34.474 "abort": false, 00:16:34.474 "seek_hole": false, 00:16:34.474 "seek_data": false, 00:16:34.474 "copy": false, 00:16:34.474 "nvme_iov_md": false 00:16:34.474 }, 00:16:34.474 "driver_specific": { 00:16:34.474 "raid": { 00:16:34.474 "uuid": "5ce43d5a-88f9-4713-9033-ad80241d1719", 00:16:34.474 "strip_size_kb": 64, 00:16:34.474 "state": "online", 00:16:34.474 "raid_level": "raid5f", 00:16:34.474 "superblock": true, 00:16:34.474 "num_base_bdevs": 4, 00:16:34.474 "num_base_bdevs_discovered": 4, 00:16:34.474 "num_base_bdevs_operational": 4, 00:16:34.474 "base_bdevs_list": [ 00:16:34.474 { 00:16:34.474 "name": "pt1", 00:16:34.474 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:34.474 "is_configured": true, 00:16:34.474 "data_offset": 2048, 
00:16:34.474 "data_size": 63488 00:16:34.474 }, 00:16:34.474 { 00:16:34.474 "name": "pt2", 00:16:34.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.474 "is_configured": true, 00:16:34.474 "data_offset": 2048, 00:16:34.474 "data_size": 63488 00:16:34.474 }, 00:16:34.474 { 00:16:34.474 "name": "pt3", 00:16:34.474 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:34.474 "is_configured": true, 00:16:34.474 "data_offset": 2048, 00:16:34.474 "data_size": 63488 00:16:34.474 }, 00:16:34.474 { 00:16:34.474 "name": "pt4", 00:16:34.474 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:34.474 "is_configured": true, 00:16:34.474 "data_offset": 2048, 00:16:34.474 "data_size": 63488 00:16:34.474 } 00:16:34.474 ] 00:16:34.474 } 00:16:34.474 } 00:16:34.474 }' 00:16:34.474 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:34.475 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:34.475 pt2 00:16:34.475 pt3 00:16:34.475 pt4' 00:16:34.475 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.475 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:34.475 15:43:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.475 15:43:33 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.475 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:34.735 [2024-11-25 15:43:33.189530] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5ce43d5a-88f9-4713-9033-ad80241d1719 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
5ce43d5a-88f9-4713-9033-ad80241d1719 ']' 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.735 [2024-11-25 15:43:33.217329] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.735 [2024-11-25 15:43:33.217390] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:34.735 [2024-11-25 15:43:33.217478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.735 [2024-11-25 15:43:33.217588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.735 [2024-11-25 15:43:33.217688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:34.735 
15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.735 15:43:33 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.735 [2024-11-25 15:43:33.381106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:34.735 [2024-11-25 15:43:33.382854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:34.735 [2024-11-25 15:43:33.382955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:34.735 [2024-11-25 15:43:33.383005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:34.735 [2024-11-25 15:43:33.383095] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:34.735 [2024-11-25 15:43:33.383171] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:34.735 [2024-11-25 15:43:33.383227] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:34.735 [2024-11-25 15:43:33.383293] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:34.735 [2024-11-25 15:43:33.383307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.735 [2024-11-25 15:43:33.383327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:34.735 request: 00:16:34.735 { 00:16:34.735 "name": "raid_bdev1", 00:16:34.735 "raid_level": "raid5f", 00:16:34.735 "base_bdevs": [ 00:16:34.735 "malloc1", 00:16:34.735 "malloc2", 00:16:34.735 "malloc3", 00:16:34.735 "malloc4" 00:16:34.735 ], 00:16:34.735 "strip_size_kb": 64, 00:16:34.735 "superblock": false, 00:16:34.735 "method": "bdev_raid_create", 00:16:34.735 "req_id": 1 00:16:34.735 } 00:16:34.735 Got JSON-RPC error response 
00:16:34.735 response: 00:16:34.735 { 00:16:34.735 "code": -17, 00:16:34.735 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:34.735 } 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:34.735 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.995 [2024-11-25 15:43:33.448944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:34.995 [2024-11-25 15:43:33.449036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:34.995 [2024-11-25 15:43:33.449066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:34.995 [2024-11-25 15:43:33.449095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.995 [2024-11-25 15:43:33.451118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.995 [2024-11-25 15:43:33.451185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:34.995 [2024-11-25 15:43:33.451269] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:34.995 [2024-11-25 15:43:33.451375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:34.995 pt1 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.995 "name": "raid_bdev1", 00:16:34.995 "uuid": "5ce43d5a-88f9-4713-9033-ad80241d1719", 00:16:34.995 "strip_size_kb": 64, 00:16:34.995 "state": "configuring", 00:16:34.995 "raid_level": "raid5f", 00:16:34.995 "superblock": true, 00:16:34.995 "num_base_bdevs": 4, 00:16:34.995 "num_base_bdevs_discovered": 1, 00:16:34.995 "num_base_bdevs_operational": 4, 00:16:34.995 "base_bdevs_list": [ 00:16:34.995 { 00:16:34.995 "name": "pt1", 00:16:34.995 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:34.995 "is_configured": true, 00:16:34.995 "data_offset": 2048, 00:16:34.995 "data_size": 63488 00:16:34.995 }, 00:16:34.995 { 00:16:34.995 "name": null, 00:16:34.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.995 "is_configured": false, 00:16:34.995 "data_offset": 2048, 00:16:34.995 "data_size": 63488 00:16:34.995 }, 00:16:34.995 { 00:16:34.995 "name": null, 00:16:34.995 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:34.995 "is_configured": false, 00:16:34.995 "data_offset": 2048, 00:16:34.995 "data_size": 63488 00:16:34.995 }, 00:16:34.995 { 00:16:34.995 "name": null, 00:16:34.995 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:34.995 "is_configured": false, 00:16:34.995 "data_offset": 2048, 00:16:34.995 "data_size": 63488 00:16:34.995 } 00:16:34.995 ] 00:16:34.995 }' 
00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.995 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.255 [2024-11-25 15:43:33.880223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:35.255 [2024-11-25 15:43:33.880321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.255 [2024-11-25 15:43:33.880341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:35.255 [2024-11-25 15:43:33.880353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.255 [2024-11-25 15:43:33.880768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.255 [2024-11-25 15:43:33.880788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:35.255 [2024-11-25 15:43:33.880857] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:35.255 [2024-11-25 15:43:33.880880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:35.255 pt2 00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:35.255 [2024-11-25 15:43:33.888219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:35.255 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0
]]
00:16:35.516 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:35.516 "name": "raid_bdev1",
00:16:35.516 "uuid": "5ce43d5a-88f9-4713-9033-ad80241d1719",
00:16:35.516 "strip_size_kb": 64,
00:16:35.516 "state": "configuring",
00:16:35.516 "raid_level": "raid5f",
00:16:35.516 "superblock": true,
00:16:35.516 "num_base_bdevs": 4,
00:16:35.516 "num_base_bdevs_discovered": 1,
00:16:35.516 "num_base_bdevs_operational": 4,
00:16:35.516 "base_bdevs_list": [
00:16:35.516 {
00:16:35.516 "name": "pt1",
00:16:35.516 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:35.516 "is_configured": true,
00:16:35.516 "data_offset": 2048,
00:16:35.516 "data_size": 63488
00:16:35.516 },
00:16:35.516 {
00:16:35.516 "name": null,
00:16:35.516 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:35.516 "is_configured": false,
00:16:35.516 "data_offset": 0,
00:16:35.516 "data_size": 63488
00:16:35.516 },
00:16:35.516 {
00:16:35.516 "name": null,
00:16:35.516 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:35.516 "is_configured": false,
00:16:35.516 "data_offset": 2048,
00:16:35.516 "data_size": 63488
00:16:35.516 },
00:16:35.516 {
00:16:35.516 "name": null,
00:16:35.516 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:35.516 "is_configured": false,
00:16:35.516 "data_offset": 2048,
00:16:35.516 "data_size": 63488
00:16:35.516 }
00:16:35.516 ]
00:16:35.516 }'
00:16:35.516 15:43:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:35.516 15:43:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:35.776 [2024-11-25 15:43:34.359448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:35.776 [2024-11-25 15:43:34.359566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:35.776 [2024-11-25 15:43:34.359604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:16:35.776 [2024-11-25 15:43:34.359633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:35.776 [2024-11-25 15:43:34.360105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:35.776 [2024-11-25 15:43:34.360161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:35.776 [2024-11-25 15:43:34.360272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:35.776 [2024-11-25 15:43:34.360323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:35.776 pt2
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:35.776 [2024-11-25 15:43:34.371423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:35.776 [2024-11-25 15:43:34.371507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:35.776 [2024-11-25 15:43:34.371555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:16:35.776 [2024-11-25 15:43:34.371582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:35.776 [2024-11-25 15:43:34.371953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:35.776 [2024-11-25 15:43:34.372014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:35.776 [2024-11-25 15:43:34.372103] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:16:35.776 [2024-11-25 15:43:34.372148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:35.776 pt3
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:35.776 [2024-11-25 15:43:34.383406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:35.776 [2024-11-25 15:43:34.383487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:35.776 [2024-11-25 15:43:34.383520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:16:35.776 [2024-11-25 15:43:34.383545] vbdev_passthru.c:
696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:35.776 [2024-11-25 15:43:34.383919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:35.776 [2024-11-25 15:43:34.383970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:35.776 [2024-11-25 15:43:34.384060] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:16:35.776 [2024-11-25 15:43:34.384106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:35.776 [2024-11-25 15:43:34.384277] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:16:35.776 [2024-11-25 15:43:34.384318] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:35.776 [2024-11-25 15:43:34.384573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:16:35.776 [2024-11-25 15:43:34.391857] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:16:35.776 [2024-11-25 15:43:34.391878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:16:35.776 [2024-11-25 15:43:34.392059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:35.776 pt4
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local
expected_state=online
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.776 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:35.777 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:35.777 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.777 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:35.777 "name": "raid_bdev1",
00:16:35.777 "uuid": "5ce43d5a-88f9-4713-9033-ad80241d1719",
00:16:35.777 "strip_size_kb": 64,
00:16:35.777 "state": "online",
00:16:35.777 "raid_level": "raid5f",
00:16:35.777 "superblock": true,
00:16:35.777 "num_base_bdevs": 4,
00:16:35.777 "num_base_bdevs_discovered": 4,
00:16:35.777 "num_base_bdevs_operational": 4,
00:16:35.777 "base_bdevs_list": [
00:16:35.777 {
00:16:35.777 "name": "pt1",
00:16:35.777 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:35.777 "is_configured": true,
"data_offset": 2048,
00:16:35.777 "data_size": 63488
00:16:35.777 },
00:16:35.777 {
00:16:35.777 "name": "pt2",
00:16:35.777 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:35.777 "is_configured": true,
00:16:35.777 "data_offset": 2048,
00:16:35.777 "data_size": 63488
00:16:35.777 },
00:16:35.777 {
00:16:35.777 "name": "pt3",
00:16:35.777 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:35.777 "is_configured": true,
00:16:35.777 "data_offset": 2048,
00:16:35.777 "data_size": 63488
00:16:35.777 },
00:16:35.777 {
00:16:35.777 "name": "pt4",
00:16:35.777 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:35.777 "is_configured": true,
00:16:35.777 "data_offset": 2048,
00:16:35.777 "data_size": 63488
00:16:35.777 }
00:16:35.777 ]
00:16:35.777 }'
00:16:35.777 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:35.777 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.347 15:43:34
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:36.347 [2024-11-25 15:43:34.820037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:36.347 "name": "raid_bdev1",
00:16:36.347 "aliases": [
00:16:36.347 "5ce43d5a-88f9-4713-9033-ad80241d1719"
00:16:36.347 ],
00:16:36.347 "product_name": "Raid Volume",
00:16:36.347 "block_size": 512,
00:16:36.347 "num_blocks": 190464,
00:16:36.347 "uuid": "5ce43d5a-88f9-4713-9033-ad80241d1719",
00:16:36.347 "assigned_rate_limits": {
00:16:36.347 "rw_ios_per_sec": 0,
00:16:36.347 "rw_mbytes_per_sec": 0,
00:16:36.347 "r_mbytes_per_sec": 0,
00:16:36.347 "w_mbytes_per_sec": 0
00:16:36.347 },
00:16:36.347 "claimed": false,
00:16:36.347 "zoned": false,
00:16:36.347 "supported_io_types": {
00:16:36.347 "read": true,
00:16:36.347 "write": true,
00:16:36.347 "unmap": false,
00:16:36.347 "flush": false,
00:16:36.347 "reset": true,
00:16:36.347 "nvme_admin": false,
00:16:36.347 "nvme_io": false,
00:16:36.347 "nvme_io_md": false,
00:16:36.347 "write_zeroes": true,
00:16:36.347 "zcopy": false,
00:16:36.347 "get_zone_info": false,
00:16:36.347 "zone_management": false,
00:16:36.347 "zone_append": false,
00:16:36.347 "compare": false,
00:16:36.347 "compare_and_write": false,
00:16:36.347 "abort": false,
00:16:36.347 "seek_hole": false,
00:16:36.347 "seek_data": false,
00:16:36.347 "copy": false,
00:16:36.347 "nvme_iov_md": false
00:16:36.347 },
00:16:36.347 "driver_specific": {
00:16:36.347 "raid": {
00:16:36.347 "uuid": "5ce43d5a-88f9-4713-9033-ad80241d1719",
00:16:36.347 "strip_size_kb": 64,
00:16:36.347 "state": "online",
00:16:36.347 "raid_level": "raid5f",
00:16:36.347 "superblock": true,
00:16:36.347 "num_base_bdevs": 4,
00:16:36.347 "num_base_bdevs_discovered": 4,
00:16:36.347 "num_base_bdevs_operational": 4,
00:16:36.347 "base_bdevs_list": [
00:16:36.347 {
00:16:36.347 "name": "pt1",
00:16:36.347 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:36.347 "is_configured": true,
00:16:36.347 "data_offset": 2048,
00:16:36.347 "data_size": 63488
00:16:36.347 },
00:16:36.347 {
00:16:36.347 "name": "pt2",
00:16:36.347 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:36.347 "is_configured": true,
00:16:36.347 "data_offset": 2048,
00:16:36.347 "data_size": 63488
00:16:36.347 },
00:16:36.347 {
00:16:36.347 "name": "pt3",
00:16:36.347 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:36.347 "is_configured": true,
00:16:36.347 "data_offset": 2048,
00:16:36.347 "data_size": 63488
00:16:36.347 },
00:16:36.347 {
00:16:36.347 "name": "pt4",
00:16:36.347 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:36.347 "is_configured": true,
00:16:36.347 "data_offset": 2048,
00:16:36.347 "data_size": 63488
00:16:36.347 }
00:16:36.347 ]
00:16:36.347 }
00:16:36.347 }
00:16:36.347 }'
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:16:36.347 pt2
00:16:36.347 pt3
00:16:36.347 pt4'
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:36.347 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd
bdev_get_bdevs -b pt1
00:16:36.348 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.348 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:36.348 15:43:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.348 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:36.348 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:36.348 15:43:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:36.348 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:16:36.348 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:36.348 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.348 15:43:35
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:36.608 [2024-11-25 15:43:35.155415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.608
15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5ce43d5a-88f9-4713-9033-ad80241d1719 '!=' 5ce43d5a-88f9-4713-9033-ad80241d1719 ']'
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:36.608 [2024-11-25 15:43:35.195201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local
num_base_bdevs_discovered
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:36.608 "name": "raid_bdev1",
00:16:36.608 "uuid": "5ce43d5a-88f9-4713-9033-ad80241d1719",
00:16:36.608 "strip_size_kb": 64,
00:16:36.608 "state": "online",
00:16:36.608 "raid_level": "raid5f",
00:16:36.608 "superblock": true,
00:16:36.608 "num_base_bdevs": 4,
00:16:36.608 "num_base_bdevs_discovered": 3,
00:16:36.608 "num_base_bdevs_operational": 3,
00:16:36.608 "base_bdevs_list": [
00:16:36.608 {
00:16:36.608 "name": null,
00:16:36.608 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:36.608 "is_configured": false,
00:16:36.608 "data_offset": 0,
00:16:36.608 "data_size": 63488
00:16:36.608 },
00:16:36.608 {
00:16:36.608 "name": "pt2",
00:16:36.608 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:36.608 "is_configured": true,
00:16:36.608 "data_offset": 2048,
00:16:36.608 "data_size": 63488
00:16:36.608 },
00:16:36.608 {
00:16:36.608 "name": "pt3",
00:16:36.608 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:36.608 "is_configured": true,
00:16:36.608 "data_offset": 2048,
00:16:36.608 "data_size": 63488
00:16:36.608 },
00:16:36.608 {
00:16:36.608 "name": "pt4",
00:16:36.608 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:36.608 "is_configured": true,
"data_offset": 2048,
00:16:36.608 "data_size": 63488
00:16:36.608 }
00:16:36.608 ]
00:16:36.608 }'
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:36.608 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:37.179 [2024-11-25 15:43:35.642401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:37.179 [2024-11-25 15:43:35.642492] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:37.179 [2024-11-25 15:43:35.642588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:37.179 [2024-11-25 15:43:35.642698] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:37.179 [2024-11-25 15:43:35.642742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test --
bdev/bdev_raid.sh@500 -- # raid_bdev=
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:37.179 [2024-11-25 15:43:35.738205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:37.179 [2024-11-25 15:43:35.738254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:37.179 [2024-11-25 15:43:35.738289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:16:37.179 [2024-11-25 15:43:35.738297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:37.179 [2024-11-25 15:43:35.740472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:37.179 [2024-11-25 15:43:35.740544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:37.179 [2024-11-25 15:43:35.740642] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:37.179 [2024-11-25 15:43:35.740714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:37.179 pt2
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test --
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:37.179 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:37.180 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:37.180 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.180 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:37.180 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:37.180 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.180 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:37.180 "name": "raid_bdev1",
00:16:37.180 "uuid": "5ce43d5a-88f9-4713-9033-ad80241d1719",
00:16:37.180 "strip_size_kb": 64,
00:16:37.180 "state": "configuring",
00:16:37.180 "raid_level": "raid5f",
00:16:37.180 "superblock": true,
"num_base_bdevs": 4, 00:16:37.180 "num_base_bdevs_discovered": 1, 00:16:37.180 "num_base_bdevs_operational": 3, 00:16:37.180 "base_bdevs_list": [ 00:16:37.180 { 00:16:37.180 "name": null, 00:16:37.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.180 "is_configured": false, 00:16:37.180 "data_offset": 2048, 00:16:37.180 "data_size": 63488 00:16:37.180 }, 00:16:37.180 { 00:16:37.180 "name": "pt2", 00:16:37.180 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.180 "is_configured": true, 00:16:37.180 "data_offset": 2048, 00:16:37.180 "data_size": 63488 00:16:37.180 }, 00:16:37.180 { 00:16:37.180 "name": null, 00:16:37.180 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.180 "is_configured": false, 00:16:37.180 "data_offset": 2048, 00:16:37.180 "data_size": 63488 00:16:37.180 }, 00:16:37.180 { 00:16:37.180 "name": null, 00:16:37.180 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:37.180 "is_configured": false, 00:16:37.180 "data_offset": 2048, 00:16:37.180 "data_size": 63488 00:16:37.180 } 00:16:37.180 ] 00:16:37.180 }' 00:16:37.180 15:43:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.180 15:43:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.758 [2024-11-25 15:43:36.141547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:37.758 [2024-11-25 
15:43:36.141660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.758 [2024-11-25 15:43:36.141685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:37.758 [2024-11-25 15:43:36.141694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.758 [2024-11-25 15:43:36.142174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.758 [2024-11-25 15:43:36.142193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:37.758 [2024-11-25 15:43:36.142276] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:37.758 [2024-11-25 15:43:36.142305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:37.758 pt3 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.758 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.758 "name": "raid_bdev1", 00:16:37.758 "uuid": "5ce43d5a-88f9-4713-9033-ad80241d1719", 00:16:37.758 "strip_size_kb": 64, 00:16:37.758 "state": "configuring", 00:16:37.758 "raid_level": "raid5f", 00:16:37.758 "superblock": true, 00:16:37.758 "num_base_bdevs": 4, 00:16:37.758 "num_base_bdevs_discovered": 2, 00:16:37.758 "num_base_bdevs_operational": 3, 00:16:37.758 "base_bdevs_list": [ 00:16:37.758 { 00:16:37.758 "name": null, 00:16:37.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.758 "is_configured": false, 00:16:37.758 "data_offset": 2048, 00:16:37.758 "data_size": 63488 00:16:37.758 }, 00:16:37.758 { 00:16:37.758 "name": "pt2", 00:16:37.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.758 "is_configured": true, 00:16:37.758 "data_offset": 2048, 00:16:37.758 "data_size": 63488 00:16:37.758 }, 00:16:37.758 { 00:16:37.758 "name": "pt3", 00:16:37.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.758 "is_configured": true, 00:16:37.758 "data_offset": 2048, 00:16:37.758 "data_size": 63488 00:16:37.758 }, 00:16:37.758 { 00:16:37.758 "name": null, 00:16:37.759 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:37.759 "is_configured": false, 00:16:37.759 "data_offset": 2048, 
00:16:37.759 "data_size": 63488 00:16:37.759 } 00:16:37.759 ] 00:16:37.759 }' 00:16:37.759 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.759 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.068 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.069 [2024-11-25 15:43:36.536878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:38.069 [2024-11-25 15:43:36.536988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.069 [2024-11-25 15:43:36.537058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:38.069 [2024-11-25 15:43:36.537099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.069 [2024-11-25 15:43:36.537572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.069 [2024-11-25 15:43:36.537628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:38.069 [2024-11-25 15:43:36.537739] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:38.069 [2024-11-25 15:43:36.537789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:38.069 [2024-11-25 15:43:36.537955] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:38.069 [2024-11-25 15:43:36.537991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:38.069 [2024-11-25 15:43:36.538257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:38.069 [2024-11-25 15:43:36.544731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:38.069 [2024-11-25 15:43:36.544794] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:38.069 [2024-11-25 15:43:36.545120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.069 pt4 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.069 
15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.069 "name": "raid_bdev1", 00:16:38.069 "uuid": "5ce43d5a-88f9-4713-9033-ad80241d1719", 00:16:38.069 "strip_size_kb": 64, 00:16:38.069 "state": "online", 00:16:38.069 "raid_level": "raid5f", 00:16:38.069 "superblock": true, 00:16:38.069 "num_base_bdevs": 4, 00:16:38.069 "num_base_bdevs_discovered": 3, 00:16:38.069 "num_base_bdevs_operational": 3, 00:16:38.069 "base_bdevs_list": [ 00:16:38.069 { 00:16:38.069 "name": null, 00:16:38.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.069 "is_configured": false, 00:16:38.069 "data_offset": 2048, 00:16:38.069 "data_size": 63488 00:16:38.069 }, 00:16:38.069 { 00:16:38.069 "name": "pt2", 00:16:38.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.069 "is_configured": true, 00:16:38.069 "data_offset": 2048, 00:16:38.069 "data_size": 63488 00:16:38.069 }, 00:16:38.069 { 00:16:38.069 "name": "pt3", 00:16:38.069 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.069 "is_configured": true, 00:16:38.069 "data_offset": 2048, 00:16:38.069 "data_size": 63488 00:16:38.069 }, 00:16:38.069 { 00:16:38.069 "name": "pt4", 00:16:38.069 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:38.069 "is_configured": true, 00:16:38.069 "data_offset": 2048, 00:16:38.069 "data_size": 63488 00:16:38.069 } 00:16:38.069 ] 00:16:38.069 }' 00:16:38.069 15:43:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.069 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.329 15:43:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:38.329 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.329 15:43:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.329 [2024-11-25 15:43:36.996900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.329 [2024-11-25 15:43:36.996984] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.329 [2024-11-25 15:43:36.997077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.329 [2024-11-25 15:43:36.997150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.329 [2024-11-25 15:43:36.997162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:38.329 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.329 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.329 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:38.329 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.329 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.589 [2024-11-25 15:43:37.068762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:38.589 [2024-11-25 15:43:37.068827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.589 [2024-11-25 15:43:37.068852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:38.589 [2024-11-25 15:43:37.068863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.589 [2024-11-25 15:43:37.071241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.589 [2024-11-25 15:43:37.071280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:38.589 [2024-11-25 15:43:37.071369] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:38.589 [2024-11-25 15:43:37.071431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:38.589 
[2024-11-25 15:43:37.071560] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:38.589 [2024-11-25 15:43:37.071572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.589 [2024-11-25 15:43:37.071587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:38.589 [2024-11-25 15:43:37.071647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.589 [2024-11-25 15:43:37.071770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:38.589 pt1 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.589 "name": "raid_bdev1", 00:16:38.589 "uuid": "5ce43d5a-88f9-4713-9033-ad80241d1719", 00:16:38.589 "strip_size_kb": 64, 00:16:38.589 "state": "configuring", 00:16:38.589 "raid_level": "raid5f", 00:16:38.589 "superblock": true, 00:16:38.589 "num_base_bdevs": 4, 00:16:38.589 "num_base_bdevs_discovered": 2, 00:16:38.589 "num_base_bdevs_operational": 3, 00:16:38.589 "base_bdevs_list": [ 00:16:38.589 { 00:16:38.589 "name": null, 00:16:38.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.589 "is_configured": false, 00:16:38.589 "data_offset": 2048, 00:16:38.589 "data_size": 63488 00:16:38.589 }, 00:16:38.589 { 00:16:38.589 "name": "pt2", 00:16:38.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.589 "is_configured": true, 00:16:38.589 "data_offset": 2048, 00:16:38.589 "data_size": 63488 00:16:38.589 }, 00:16:38.589 { 00:16:38.589 "name": "pt3", 00:16:38.589 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.589 "is_configured": true, 00:16:38.589 "data_offset": 2048, 00:16:38.589 "data_size": 63488 00:16:38.589 }, 00:16:38.589 { 00:16:38.589 "name": null, 00:16:38.589 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:38.589 "is_configured": false, 00:16:38.589 "data_offset": 2048, 00:16:38.589 "data_size": 63488 00:16:38.589 } 00:16:38.589 ] 
00:16:38.589 }' 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.589 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.849 [2024-11-25 15:43:37.500063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:38.849 [2024-11-25 15:43:37.500165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.849 [2024-11-25 15:43:37.500228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:38.849 [2024-11-25 15:43:37.500277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.849 [2024-11-25 15:43:37.500780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.849 [2024-11-25 15:43:37.500844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:38.849 [2024-11-25 15:43:37.500968] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:38.849 [2024-11-25 15:43:37.501045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:38.849 [2024-11-25 15:43:37.501233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:38.849 [2024-11-25 15:43:37.501276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:38.849 [2024-11-25 15:43:37.501578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:38.849 [2024-11-25 15:43:37.509541] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:38.849 [2024-11-25 15:43:37.509603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:38.849 [2024-11-25 15:43:37.509919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.849 pt4 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.849 15:43:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.849 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.109 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.109 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.109 "name": "raid_bdev1", 00:16:39.109 "uuid": "5ce43d5a-88f9-4713-9033-ad80241d1719", 00:16:39.109 "strip_size_kb": 64, 00:16:39.109 "state": "online", 00:16:39.109 "raid_level": "raid5f", 00:16:39.109 "superblock": true, 00:16:39.109 "num_base_bdevs": 4, 00:16:39.109 "num_base_bdevs_discovered": 3, 00:16:39.109 "num_base_bdevs_operational": 3, 00:16:39.109 "base_bdevs_list": [ 00:16:39.109 { 00:16:39.109 "name": null, 00:16:39.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.109 "is_configured": false, 00:16:39.109 "data_offset": 2048, 00:16:39.109 "data_size": 63488 00:16:39.109 }, 00:16:39.109 { 00:16:39.109 "name": "pt2", 00:16:39.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.109 "is_configured": true, 00:16:39.109 "data_offset": 2048, 00:16:39.109 "data_size": 63488 00:16:39.109 }, 00:16:39.109 { 00:16:39.109 "name": "pt3", 00:16:39.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:39.109 "is_configured": true, 00:16:39.109 "data_offset": 2048, 00:16:39.109 "data_size": 63488 
00:16:39.109 }, 00:16:39.109 { 00:16:39.109 "name": "pt4", 00:16:39.109 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:39.109 "is_configured": true, 00:16:39.109 "data_offset": 2048, 00:16:39.109 "data_size": 63488 00:16:39.109 } 00:16:39.109 ] 00:16:39.109 }' 00:16:39.109 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.109 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.370 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:39.370 15:43:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:39.370 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.370 15:43:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.370 15:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.370 15:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:39.370 15:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.370 15:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:39.370 15:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.370 15:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.370 [2024-11-25 15:43:38.042032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.630 15:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.630 15:43:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5ce43d5a-88f9-4713-9033-ad80241d1719 '!=' 5ce43d5a-88f9-4713-9033-ad80241d1719 ']' 00:16:39.630 15:43:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83715 00:16:39.630 15:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83715 ']' 00:16:39.630 15:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83715 00:16:39.630 15:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:39.630 15:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.630 15:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83715 00:16:39.630 15:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.630 15:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.630 15:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83715' 00:16:39.630 killing process with pid 83715 00:16:39.630 15:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83715 00:16:39.630 [2024-11-25 15:43:38.127235] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:39.630 [2024-11-25 15:43:38.127333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.630 [2024-11-25 15:43:38.127411] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.630 [2024-11-25 15:43:38.127423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:39.630 15:43:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83715 00:16:39.890 [2024-11-25 15:43:38.494632] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:41.267 15:43:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:41.267 
00:16:41.267 real 0m8.292s 00:16:41.267 user 0m13.076s 00:16:41.267 sys 0m1.519s 00:16:41.267 15:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.267 15:43:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.267 ************************************ 00:16:41.267 END TEST raid5f_superblock_test 00:16:41.267 ************************************ 00:16:41.267 15:43:39 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:41.267 15:43:39 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:41.267 15:43:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:41.267 15:43:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.267 15:43:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.267 ************************************ 00:16:41.267 START TEST raid5f_rebuild_test 00:16:41.267 ************************************ 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:41.267 15:43:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84201 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84201 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84201 ']' 00:16:41.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.267 15:43:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.267 [2024-11-25 15:43:39.707877] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:16:41.267 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:41.267 Zero copy mechanism will not be used. 
00:16:41.267 [2024-11-25 15:43:39.708069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84201 ] 00:16:41.267 [2024-11-25 15:43:39.879189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.526 [2024-11-25 15:43:39.983598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.526 [2024-11-25 15:43:40.164803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.526 [2024-11-25 15:43:40.164854] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.095 BaseBdev1_malloc 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.095 [2024-11-25 15:43:40.585269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:42.095 [2024-11-25 15:43:40.585333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.095 [2024-11-25 15:43:40.585358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:42.095 [2024-11-25 15:43:40.585369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.095 [2024-11-25 15:43:40.587397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.095 [2024-11-25 15:43:40.587438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:42.095 BaseBdev1 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.095 BaseBdev2_malloc 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.095 [2024-11-25 15:43:40.638440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:42.095 [2024-11-25 15:43:40.638497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.095 [2024-11-25 15:43:40.638517] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:42.095 [2024-11-25 15:43:40.638528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.095 [2024-11-25 15:43:40.640590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.095 [2024-11-25 15:43:40.640629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:42.095 BaseBdev2 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.095 BaseBdev3_malloc 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.095 [2024-11-25 15:43:40.701905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:42.095 [2024-11-25 15:43:40.702003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.095 [2024-11-25 15:43:40.702038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:42.095 [2024-11-25 15:43:40.702050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.095 
[2024-11-25 15:43:40.704203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.095 [2024-11-25 15:43:40.704243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:42.095 BaseBdev3 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.095 BaseBdev4_malloc 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.095 [2024-11-25 15:43:40.758641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:42.095 [2024-11-25 15:43:40.758696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.095 [2024-11-25 15:43:40.758731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:42.095 [2024-11-25 15:43:40.758742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.095 [2024-11-25 15:43:40.760795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.095 [2024-11-25 15:43:40.760889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:42.095 BaseBdev4 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.095 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.355 spare_malloc 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.356 spare_delay 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.356 [2024-11-25 15:43:40.824072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:42.356 [2024-11-25 15:43:40.824124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.356 [2024-11-25 15:43:40.824159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:42.356 [2024-11-25 15:43:40.824170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.356 [2024-11-25 15:43:40.826164] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.356 [2024-11-25 15:43:40.826203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:42.356 spare 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.356 [2024-11-25 15:43:40.836103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.356 [2024-11-25 15:43:40.837839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.356 [2024-11-25 15:43:40.837898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:42.356 [2024-11-25 15:43:40.837946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:42.356 [2024-11-25 15:43:40.838041] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:42.356 [2024-11-25 15:43:40.838054] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:42.356 [2024-11-25 15:43:40.838293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:42.356 [2024-11-25 15:43:40.845370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:42.356 [2024-11-25 15:43:40.845389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:42.356 [2024-11-25 15:43:40.845588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.356 15:43:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.356 "name": "raid_bdev1", 00:16:42.356 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:42.356 "strip_size_kb": 64, 00:16:42.356 "state": "online", 00:16:42.356 
"raid_level": "raid5f", 00:16:42.356 "superblock": false, 00:16:42.356 "num_base_bdevs": 4, 00:16:42.356 "num_base_bdevs_discovered": 4, 00:16:42.356 "num_base_bdevs_operational": 4, 00:16:42.356 "base_bdevs_list": [ 00:16:42.356 { 00:16:42.356 "name": "BaseBdev1", 00:16:42.356 "uuid": "16eed5b8-23a6-5b8c-8d8a-d2108c035d13", 00:16:42.356 "is_configured": true, 00:16:42.356 "data_offset": 0, 00:16:42.356 "data_size": 65536 00:16:42.356 }, 00:16:42.356 { 00:16:42.356 "name": "BaseBdev2", 00:16:42.356 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:42.356 "is_configured": true, 00:16:42.356 "data_offset": 0, 00:16:42.356 "data_size": 65536 00:16:42.356 }, 00:16:42.356 { 00:16:42.356 "name": "BaseBdev3", 00:16:42.356 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:42.356 "is_configured": true, 00:16:42.356 "data_offset": 0, 00:16:42.356 "data_size": 65536 00:16:42.356 }, 00:16:42.356 { 00:16:42.356 "name": "BaseBdev4", 00:16:42.356 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:42.356 "is_configured": true, 00:16:42.356 "data_offset": 0, 00:16:42.356 "data_size": 65536 00:16:42.356 } 00:16:42.356 ] 00:16:42.356 }' 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.356 15:43:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.924 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.925 [2024-11-25 15:43:41.317432] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:42.925 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:42.925 [2024-11-25 15:43:41.592899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:43.184 /dev/nbd0 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:43.184 1+0 records in 00:16:43.184 1+0 records out 00:16:43.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282109 s, 14.5 MB/s 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:43.184 15:43:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:43.754 512+0 records in 00:16:43.754 512+0 records out 00:16:43.754 100663296 bytes (101 MB, 96 MiB) copied, 0.488714 s, 206 MB/s 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:43.754 
[2024-11-25 15:43:42.354165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.754 [2024-11-25 15:43:42.368946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.754 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.754 "name": "raid_bdev1", 00:16:43.754 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:43.754 "strip_size_kb": 64, 00:16:43.754 "state": "online", 00:16:43.754 "raid_level": "raid5f", 00:16:43.754 "superblock": false, 00:16:43.754 "num_base_bdevs": 4, 00:16:43.754 "num_base_bdevs_discovered": 3, 00:16:43.754 "num_base_bdevs_operational": 3, 00:16:43.754 "base_bdevs_list": [ 00:16:43.754 { 00:16:43.754 "name": null, 00:16:43.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.754 "is_configured": false, 00:16:43.754 "data_offset": 0, 00:16:43.754 "data_size": 65536 00:16:43.754 }, 00:16:43.754 { 00:16:43.754 "name": "BaseBdev2", 00:16:43.754 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:43.754 "is_configured": true, 00:16:43.754 "data_offset": 0, 00:16:43.754 "data_size": 65536 00:16:43.754 }, 00:16:43.754 { 00:16:43.754 "name": "BaseBdev3", 00:16:43.755 "uuid": 
"90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:43.755 "is_configured": true, 00:16:43.755 "data_offset": 0, 00:16:43.755 "data_size": 65536 00:16:43.755 }, 00:16:43.755 { 00:16:43.755 "name": "BaseBdev4", 00:16:43.755 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:43.755 "is_configured": true, 00:16:43.755 "data_offset": 0, 00:16:43.755 "data_size": 65536 00:16:43.755 } 00:16:43.755 ] 00:16:43.755 }' 00:16:43.755 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.755 15:43:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.324 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:44.324 15:43:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.324 15:43:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.324 [2024-11-25 15:43:42.792197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.324 [2024-11-25 15:43:42.808460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:44.324 15:43:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.324 15:43:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:44.324 [2024-11-25 15:43:42.817769] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:45.262 15:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.262 15:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.262 15:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.262 15:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.262 15:43:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.262 15:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.262 15:43:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.262 15:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.262 15:43:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.262 15:43:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.262 15:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.262 "name": "raid_bdev1", 00:16:45.262 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:45.262 "strip_size_kb": 64, 00:16:45.262 "state": "online", 00:16:45.262 "raid_level": "raid5f", 00:16:45.262 "superblock": false, 00:16:45.262 "num_base_bdevs": 4, 00:16:45.262 "num_base_bdevs_discovered": 4, 00:16:45.262 "num_base_bdevs_operational": 4, 00:16:45.262 "process": { 00:16:45.262 "type": "rebuild", 00:16:45.262 "target": "spare", 00:16:45.262 "progress": { 00:16:45.262 "blocks": 19200, 00:16:45.262 "percent": 9 00:16:45.262 } 00:16:45.262 }, 00:16:45.262 "base_bdevs_list": [ 00:16:45.262 { 00:16:45.262 "name": "spare", 00:16:45.262 "uuid": "239ce912-ff85-54f1-99d8-4edf6366455f", 00:16:45.262 "is_configured": true, 00:16:45.262 "data_offset": 0, 00:16:45.262 "data_size": 65536 00:16:45.262 }, 00:16:45.262 { 00:16:45.262 "name": "BaseBdev2", 00:16:45.262 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:45.262 "is_configured": true, 00:16:45.262 "data_offset": 0, 00:16:45.262 "data_size": 65536 00:16:45.262 }, 00:16:45.262 { 00:16:45.262 "name": "BaseBdev3", 00:16:45.262 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:45.262 "is_configured": true, 00:16:45.262 "data_offset": 0, 00:16:45.262 "data_size": 65536 00:16:45.262 }, 
00:16:45.262 { 00:16:45.262 "name": "BaseBdev4", 00:16:45.262 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:45.262 "is_configured": true, 00:16:45.262 "data_offset": 0, 00:16:45.262 "data_size": 65536 00:16:45.262 } 00:16:45.262 ] 00:16:45.262 }' 00:16:45.262 15:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.262 15:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.262 15:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.522 15:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.522 15:43:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:45.522 15:43:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.522 15:43:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.522 [2024-11-25 15:43:43.968260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.522 [2024-11-25 15:43:44.023751] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:45.522 [2024-11-25 15:43:44.023816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.522 [2024-11-25 15:43:44.023833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.522 [2024-11-25 15:43:44.023843] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.522 "name": "raid_bdev1", 00:16:45.522 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:45.522 "strip_size_kb": 64, 00:16:45.522 "state": "online", 00:16:45.522 "raid_level": "raid5f", 00:16:45.522 "superblock": false, 00:16:45.522 "num_base_bdevs": 4, 00:16:45.522 "num_base_bdevs_discovered": 3, 00:16:45.522 "num_base_bdevs_operational": 3, 00:16:45.522 "base_bdevs_list": [ 00:16:45.522 { 00:16:45.522 "name": null, 00:16:45.522 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:45.522 "is_configured": false, 00:16:45.522 "data_offset": 0, 00:16:45.522 "data_size": 65536 00:16:45.522 }, 00:16:45.522 { 00:16:45.522 "name": "BaseBdev2", 00:16:45.522 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:45.522 "is_configured": true, 00:16:45.522 "data_offset": 0, 00:16:45.522 "data_size": 65536 00:16:45.522 }, 00:16:45.522 { 00:16:45.522 "name": "BaseBdev3", 00:16:45.522 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:45.522 "is_configured": true, 00:16:45.522 "data_offset": 0, 00:16:45.522 "data_size": 65536 00:16:45.522 }, 00:16:45.522 { 00:16:45.522 "name": "BaseBdev4", 00:16:45.522 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:45.522 "is_configured": true, 00:16:45.522 "data_offset": 0, 00:16:45.522 "data_size": 65536 00:16:45.522 } 00:16:45.522 ] 00:16:45.522 }' 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.522 15:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.091 "name": "raid_bdev1", 00:16:46.091 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:46.091 "strip_size_kb": 64, 00:16:46.091 "state": "online", 00:16:46.091 "raid_level": "raid5f", 00:16:46.091 "superblock": false, 00:16:46.091 "num_base_bdevs": 4, 00:16:46.091 "num_base_bdevs_discovered": 3, 00:16:46.091 "num_base_bdevs_operational": 3, 00:16:46.091 "base_bdevs_list": [ 00:16:46.091 { 00:16:46.091 "name": null, 00:16:46.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.091 "is_configured": false, 00:16:46.091 "data_offset": 0, 00:16:46.091 "data_size": 65536 00:16:46.091 }, 00:16:46.091 { 00:16:46.091 "name": "BaseBdev2", 00:16:46.091 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:46.091 "is_configured": true, 00:16:46.091 "data_offset": 0, 00:16:46.091 "data_size": 65536 00:16:46.091 }, 00:16:46.091 { 00:16:46.091 "name": "BaseBdev3", 00:16:46.091 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:46.091 "is_configured": true, 00:16:46.091 "data_offset": 0, 00:16:46.091 "data_size": 65536 00:16:46.091 }, 00:16:46.091 { 00:16:46.091 "name": "BaseBdev4", 00:16:46.091 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:46.091 "is_configured": true, 00:16:46.091 "data_offset": 0, 00:16:46.091 "data_size": 65536 00:16:46.091 } 00:16:46.091 ] 00:16:46.091 }' 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.091 [2024-11-25 15:43:44.677157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.091 [2024-11-25 15:43:44.692117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.091 15:43:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:46.091 [2024-11-25 15:43:44.701380] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:47.027 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.027 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.027 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.027 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.027 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.027 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.027 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.027 15:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.027 15:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.287 15:43:45 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.287 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.287 "name": "raid_bdev1", 00:16:47.287 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:47.287 "strip_size_kb": 64, 00:16:47.287 "state": "online", 00:16:47.287 "raid_level": "raid5f", 00:16:47.287 "superblock": false, 00:16:47.287 "num_base_bdevs": 4, 00:16:47.287 "num_base_bdevs_discovered": 4, 00:16:47.287 "num_base_bdevs_operational": 4, 00:16:47.287 "process": { 00:16:47.287 "type": "rebuild", 00:16:47.287 "target": "spare", 00:16:47.287 "progress": { 00:16:47.287 "blocks": 19200, 00:16:47.287 "percent": 9 00:16:47.287 } 00:16:47.287 }, 00:16:47.287 "base_bdevs_list": [ 00:16:47.287 { 00:16:47.287 "name": "spare", 00:16:47.287 "uuid": "239ce912-ff85-54f1-99d8-4edf6366455f", 00:16:47.287 "is_configured": true, 00:16:47.287 "data_offset": 0, 00:16:47.287 "data_size": 65536 00:16:47.287 }, 00:16:47.287 { 00:16:47.287 "name": "BaseBdev2", 00:16:47.287 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:47.287 "is_configured": true, 00:16:47.287 "data_offset": 0, 00:16:47.287 "data_size": 65536 00:16:47.287 }, 00:16:47.287 { 00:16:47.287 "name": "BaseBdev3", 00:16:47.287 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:47.287 "is_configured": true, 00:16:47.287 "data_offset": 0, 00:16:47.287 "data_size": 65536 00:16:47.287 }, 00:16:47.287 { 00:16:47.287 "name": "BaseBdev4", 00:16:47.287 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:47.287 "is_configured": true, 00:16:47.287 "data_offset": 0, 00:16:47.287 "data_size": 65536 00:16:47.287 } 00:16:47.287 ] 00:16:47.287 }' 00:16:47.287 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.287 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.287 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:47.287 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=598 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.288 "name": "raid_bdev1", 00:16:47.288 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:47.288 "strip_size_kb": 64, 
00:16:47.288 "state": "online", 00:16:47.288 "raid_level": "raid5f", 00:16:47.288 "superblock": false, 00:16:47.288 "num_base_bdevs": 4, 00:16:47.288 "num_base_bdevs_discovered": 4, 00:16:47.288 "num_base_bdevs_operational": 4, 00:16:47.288 "process": { 00:16:47.288 "type": "rebuild", 00:16:47.288 "target": "spare", 00:16:47.288 "progress": { 00:16:47.288 "blocks": 21120, 00:16:47.288 "percent": 10 00:16:47.288 } 00:16:47.288 }, 00:16:47.288 "base_bdevs_list": [ 00:16:47.288 { 00:16:47.288 "name": "spare", 00:16:47.288 "uuid": "239ce912-ff85-54f1-99d8-4edf6366455f", 00:16:47.288 "is_configured": true, 00:16:47.288 "data_offset": 0, 00:16:47.288 "data_size": 65536 00:16:47.288 }, 00:16:47.288 { 00:16:47.288 "name": "BaseBdev2", 00:16:47.288 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:47.288 "is_configured": true, 00:16:47.288 "data_offset": 0, 00:16:47.288 "data_size": 65536 00:16:47.288 }, 00:16:47.288 { 00:16:47.288 "name": "BaseBdev3", 00:16:47.288 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:47.288 "is_configured": true, 00:16:47.288 "data_offset": 0, 00:16:47.288 "data_size": 65536 00:16:47.288 }, 00:16:47.288 { 00:16:47.288 "name": "BaseBdev4", 00:16:47.288 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:47.288 "is_configured": true, 00:16:47.288 "data_offset": 0, 00:16:47.288 "data_size": 65536 00:16:47.288 } 00:16:47.288 ] 00:16:47.288 }' 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.288 15:43:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:48.666 15:43:46 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:48.666 15:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.666 15:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.666 15:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.666 15:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.666 15:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.666 15:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.666 15:43:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.666 15:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.666 15:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.666 15:43:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.666 15:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.666 "name": "raid_bdev1", 00:16:48.666 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:48.666 "strip_size_kb": 64, 00:16:48.666 "state": "online", 00:16:48.666 "raid_level": "raid5f", 00:16:48.666 "superblock": false, 00:16:48.666 "num_base_bdevs": 4, 00:16:48.666 "num_base_bdevs_discovered": 4, 00:16:48.666 "num_base_bdevs_operational": 4, 00:16:48.666 "process": { 00:16:48.666 "type": "rebuild", 00:16:48.666 "target": "spare", 00:16:48.666 "progress": { 00:16:48.666 "blocks": 42240, 00:16:48.666 "percent": 21 00:16:48.666 } 00:16:48.666 }, 00:16:48.666 "base_bdevs_list": [ 00:16:48.666 { 00:16:48.666 "name": "spare", 00:16:48.666 "uuid": "239ce912-ff85-54f1-99d8-4edf6366455f", 00:16:48.666 "is_configured": true, 
00:16:48.666 "data_offset": 0, 00:16:48.666 "data_size": 65536 00:16:48.666 }, 00:16:48.666 { 00:16:48.666 "name": "BaseBdev2", 00:16:48.666 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:48.666 "is_configured": true, 00:16:48.666 "data_offset": 0, 00:16:48.666 "data_size": 65536 00:16:48.666 }, 00:16:48.666 { 00:16:48.666 "name": "BaseBdev3", 00:16:48.666 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:48.666 "is_configured": true, 00:16:48.666 "data_offset": 0, 00:16:48.666 "data_size": 65536 00:16:48.666 }, 00:16:48.666 { 00:16:48.666 "name": "BaseBdev4", 00:16:48.666 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:48.666 "is_configured": true, 00:16:48.666 "data_offset": 0, 00:16:48.666 "data_size": 65536 00:16:48.666 } 00:16:48.666 ] 00:16:48.666 }' 00:16:48.666 15:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.666 15:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.666 15:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.666 15:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.666 15:43:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:49.605 15:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:49.605 15:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.605 15:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.605 15:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.605 15:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.605 15:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:49.605 15:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.605 15:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.605 15:43:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.605 15:43:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.605 15:43:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.605 15:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.605 "name": "raid_bdev1", 00:16:49.605 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:49.605 "strip_size_kb": 64, 00:16:49.605 "state": "online", 00:16:49.605 "raid_level": "raid5f", 00:16:49.605 "superblock": false, 00:16:49.605 "num_base_bdevs": 4, 00:16:49.605 "num_base_bdevs_discovered": 4, 00:16:49.605 "num_base_bdevs_operational": 4, 00:16:49.605 "process": { 00:16:49.605 "type": "rebuild", 00:16:49.605 "target": "spare", 00:16:49.605 "progress": { 00:16:49.605 "blocks": 65280, 00:16:49.606 "percent": 33 00:16:49.606 } 00:16:49.606 }, 00:16:49.606 "base_bdevs_list": [ 00:16:49.606 { 00:16:49.606 "name": "spare", 00:16:49.606 "uuid": "239ce912-ff85-54f1-99d8-4edf6366455f", 00:16:49.606 "is_configured": true, 00:16:49.606 "data_offset": 0, 00:16:49.606 "data_size": 65536 00:16:49.606 }, 00:16:49.606 { 00:16:49.606 "name": "BaseBdev2", 00:16:49.606 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:49.606 "is_configured": true, 00:16:49.606 "data_offset": 0, 00:16:49.606 "data_size": 65536 00:16:49.606 }, 00:16:49.606 { 00:16:49.606 "name": "BaseBdev3", 00:16:49.606 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:49.606 "is_configured": true, 00:16:49.606 "data_offset": 0, 00:16:49.606 "data_size": 65536 00:16:49.606 }, 00:16:49.606 { 00:16:49.606 "name": "BaseBdev4", 00:16:49.606 "uuid": 
"632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:49.606 "is_configured": true, 00:16:49.606 "data_offset": 0, 00:16:49.606 "data_size": 65536 00:16:49.606 } 00:16:49.606 ] 00:16:49.606 }' 00:16:49.606 15:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.606 15:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.606 15:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.606 15:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.606 15:43:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.987 "name": "raid_bdev1", 00:16:50.987 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:50.987 "strip_size_kb": 64, 00:16:50.987 "state": "online", 00:16:50.987 "raid_level": "raid5f", 00:16:50.987 "superblock": false, 00:16:50.987 "num_base_bdevs": 4, 00:16:50.987 "num_base_bdevs_discovered": 4, 00:16:50.987 "num_base_bdevs_operational": 4, 00:16:50.987 "process": { 00:16:50.987 "type": "rebuild", 00:16:50.987 "target": "spare", 00:16:50.987 "progress": { 00:16:50.987 "blocks": 86400, 00:16:50.987 "percent": 43 00:16:50.987 } 00:16:50.987 }, 00:16:50.987 "base_bdevs_list": [ 00:16:50.987 { 00:16:50.987 "name": "spare", 00:16:50.987 "uuid": "239ce912-ff85-54f1-99d8-4edf6366455f", 00:16:50.987 "is_configured": true, 00:16:50.987 "data_offset": 0, 00:16:50.987 "data_size": 65536 00:16:50.987 }, 00:16:50.987 { 00:16:50.987 "name": "BaseBdev2", 00:16:50.987 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:50.987 "is_configured": true, 00:16:50.987 "data_offset": 0, 00:16:50.987 "data_size": 65536 00:16:50.987 }, 00:16:50.987 { 00:16:50.987 "name": "BaseBdev3", 00:16:50.987 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:50.987 "is_configured": true, 00:16:50.987 "data_offset": 0, 00:16:50.987 "data_size": 65536 00:16:50.987 }, 00:16:50.987 { 00:16:50.987 "name": "BaseBdev4", 00:16:50.987 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:50.987 "is_configured": true, 00:16:50.987 "data_offset": 0, 00:16:50.987 "data_size": 65536 00:16:50.987 } 00:16:50.987 ] 00:16:50.987 }' 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:16:50.987 15:43:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.927 "name": "raid_bdev1", 00:16:51.927 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:51.927 "strip_size_kb": 64, 00:16:51.927 "state": "online", 00:16:51.927 "raid_level": "raid5f", 00:16:51.927 "superblock": false, 00:16:51.927 "num_base_bdevs": 4, 00:16:51.927 "num_base_bdevs_discovered": 4, 00:16:51.927 "num_base_bdevs_operational": 4, 00:16:51.927 "process": { 00:16:51.927 "type": "rebuild", 00:16:51.927 "target": "spare", 00:16:51.927 "progress": { 00:16:51.927 "blocks": 107520, 00:16:51.927 "percent": 54 00:16:51.927 } 00:16:51.927 }, 00:16:51.927 
"base_bdevs_list": [ 00:16:51.927 { 00:16:51.927 "name": "spare", 00:16:51.927 "uuid": "239ce912-ff85-54f1-99d8-4edf6366455f", 00:16:51.927 "is_configured": true, 00:16:51.927 "data_offset": 0, 00:16:51.927 "data_size": 65536 00:16:51.927 }, 00:16:51.927 { 00:16:51.927 "name": "BaseBdev2", 00:16:51.927 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:51.927 "is_configured": true, 00:16:51.927 "data_offset": 0, 00:16:51.927 "data_size": 65536 00:16:51.927 }, 00:16:51.927 { 00:16:51.927 "name": "BaseBdev3", 00:16:51.927 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:51.927 "is_configured": true, 00:16:51.927 "data_offset": 0, 00:16:51.927 "data_size": 65536 00:16:51.927 }, 00:16:51.927 { 00:16:51.927 "name": "BaseBdev4", 00:16:51.927 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:51.927 "is_configured": true, 00:16:51.927 "data_offset": 0, 00:16:51.927 "data_size": 65536 00:16:51.927 } 00:16:51.927 ] 00:16:51.927 }' 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.927 15:43:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.884 15:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.884 15:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.884 15:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.884 15:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.884 15:43:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.884 15:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.885 15:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.885 15:43:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.885 15:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.885 15:43:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.144 15:43:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.144 15:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.144 "name": "raid_bdev1", 00:16:53.144 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:53.144 "strip_size_kb": 64, 00:16:53.144 "state": "online", 00:16:53.144 "raid_level": "raid5f", 00:16:53.144 "superblock": false, 00:16:53.144 "num_base_bdevs": 4, 00:16:53.144 "num_base_bdevs_discovered": 4, 00:16:53.144 "num_base_bdevs_operational": 4, 00:16:53.144 "process": { 00:16:53.144 "type": "rebuild", 00:16:53.144 "target": "spare", 00:16:53.144 "progress": { 00:16:53.144 "blocks": 130560, 00:16:53.144 "percent": 66 00:16:53.144 } 00:16:53.144 }, 00:16:53.144 "base_bdevs_list": [ 00:16:53.144 { 00:16:53.144 "name": "spare", 00:16:53.144 "uuid": "239ce912-ff85-54f1-99d8-4edf6366455f", 00:16:53.144 "is_configured": true, 00:16:53.144 "data_offset": 0, 00:16:53.144 "data_size": 65536 00:16:53.144 }, 00:16:53.144 { 00:16:53.144 "name": "BaseBdev2", 00:16:53.144 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:53.144 "is_configured": true, 00:16:53.144 "data_offset": 0, 00:16:53.144 "data_size": 65536 00:16:53.144 }, 00:16:53.144 { 00:16:53.144 "name": "BaseBdev3", 00:16:53.144 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:53.144 
"is_configured": true, 00:16:53.145 "data_offset": 0, 00:16:53.145 "data_size": 65536 00:16:53.145 }, 00:16:53.145 { 00:16:53.145 "name": "BaseBdev4", 00:16:53.145 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:53.145 "is_configured": true, 00:16:53.145 "data_offset": 0, 00:16:53.145 "data_size": 65536 00:16:53.145 } 00:16:53.145 ] 00:16:53.145 }' 00:16:53.145 15:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.145 15:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.145 15:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.145 15:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.145 15:43:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:54.083 15:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.083 15:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.083 15:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.083 15:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.083 15:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.083 15:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.083 15:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.083 15:43:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.083 15:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.083 15:43:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:54.083 15:43:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.083 15:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.083 "name": "raid_bdev1", 00:16:54.083 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:54.083 "strip_size_kb": 64, 00:16:54.083 "state": "online", 00:16:54.083 "raid_level": "raid5f", 00:16:54.083 "superblock": false, 00:16:54.083 "num_base_bdevs": 4, 00:16:54.083 "num_base_bdevs_discovered": 4, 00:16:54.083 "num_base_bdevs_operational": 4, 00:16:54.083 "process": { 00:16:54.083 "type": "rebuild", 00:16:54.083 "target": "spare", 00:16:54.083 "progress": { 00:16:54.083 "blocks": 151680, 00:16:54.083 "percent": 77 00:16:54.083 } 00:16:54.083 }, 00:16:54.083 "base_bdevs_list": [ 00:16:54.083 { 00:16:54.083 "name": "spare", 00:16:54.083 "uuid": "239ce912-ff85-54f1-99d8-4edf6366455f", 00:16:54.083 "is_configured": true, 00:16:54.083 "data_offset": 0, 00:16:54.084 "data_size": 65536 00:16:54.084 }, 00:16:54.084 { 00:16:54.084 "name": "BaseBdev2", 00:16:54.084 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:54.084 "is_configured": true, 00:16:54.084 "data_offset": 0, 00:16:54.084 "data_size": 65536 00:16:54.084 }, 00:16:54.084 { 00:16:54.084 "name": "BaseBdev3", 00:16:54.084 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:54.084 "is_configured": true, 00:16:54.084 "data_offset": 0, 00:16:54.084 "data_size": 65536 00:16:54.084 }, 00:16:54.084 { 00:16:54.084 "name": "BaseBdev4", 00:16:54.084 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:54.084 "is_configured": true, 00:16:54.084 "data_offset": 0, 00:16:54.084 "data_size": 65536 00:16:54.084 } 00:16:54.084 ] 00:16:54.084 }' 00:16:54.084 15:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.344 15:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.344 15:43:52 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.344 15:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.344 15:43:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.283 15:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.283 15:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.283 15:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.283 15:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.283 15:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.283 15:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.283 15:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.283 15:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.283 15:43:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.283 15:43:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.283 15:43:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.283 15:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.283 "name": "raid_bdev1", 00:16:55.283 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:55.283 "strip_size_kb": 64, 00:16:55.283 "state": "online", 00:16:55.283 "raid_level": "raid5f", 00:16:55.283 "superblock": false, 00:16:55.283 "num_base_bdevs": 4, 00:16:55.283 "num_base_bdevs_discovered": 4, 00:16:55.283 "num_base_bdevs_operational": 4, 00:16:55.283 "process": { 00:16:55.283 
"type": "rebuild", 00:16:55.283 "target": "spare", 00:16:55.283 "progress": { 00:16:55.283 "blocks": 174720, 00:16:55.283 "percent": 88 00:16:55.283 } 00:16:55.283 }, 00:16:55.283 "base_bdevs_list": [ 00:16:55.283 { 00:16:55.283 "name": "spare", 00:16:55.283 "uuid": "239ce912-ff85-54f1-99d8-4edf6366455f", 00:16:55.283 "is_configured": true, 00:16:55.283 "data_offset": 0, 00:16:55.283 "data_size": 65536 00:16:55.283 }, 00:16:55.283 { 00:16:55.283 "name": "BaseBdev2", 00:16:55.283 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:55.283 "is_configured": true, 00:16:55.283 "data_offset": 0, 00:16:55.283 "data_size": 65536 00:16:55.283 }, 00:16:55.283 { 00:16:55.283 "name": "BaseBdev3", 00:16:55.283 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:55.283 "is_configured": true, 00:16:55.283 "data_offset": 0, 00:16:55.283 "data_size": 65536 00:16:55.283 }, 00:16:55.283 { 00:16:55.283 "name": "BaseBdev4", 00:16:55.283 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:55.283 "is_configured": true, 00:16:55.283 "data_offset": 0, 00:16:55.283 "data_size": 65536 00:16:55.283 } 00:16:55.283 ] 00:16:55.283 }' 00:16:55.283 15:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.283 15:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.283 15:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.543 15:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.543 15:43:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:56.483 15:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.483 15:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.483 15:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:56.483 15:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.483 15:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.483 15:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.483 15:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.483 15:43:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.483 15:43:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.483 15:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.483 15:43:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.483 15:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.483 "name": "raid_bdev1", 00:16:56.483 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:56.483 "strip_size_kb": 64, 00:16:56.483 "state": "online", 00:16:56.483 "raid_level": "raid5f", 00:16:56.483 "superblock": false, 00:16:56.483 "num_base_bdevs": 4, 00:16:56.483 "num_base_bdevs_discovered": 4, 00:16:56.483 "num_base_bdevs_operational": 4, 00:16:56.483 "process": { 00:16:56.483 "type": "rebuild", 00:16:56.483 "target": "spare", 00:16:56.483 "progress": { 00:16:56.483 "blocks": 195840, 00:16:56.483 "percent": 99 00:16:56.483 } 00:16:56.483 }, 00:16:56.483 "base_bdevs_list": [ 00:16:56.483 { 00:16:56.483 "name": "spare", 00:16:56.483 "uuid": "239ce912-ff85-54f1-99d8-4edf6366455f", 00:16:56.483 "is_configured": true, 00:16:56.483 "data_offset": 0, 00:16:56.483 "data_size": 65536 00:16:56.483 }, 00:16:56.483 { 00:16:56.483 "name": "BaseBdev2", 00:16:56.483 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:56.483 "is_configured": true, 00:16:56.483 "data_offset": 0, 00:16:56.483 
"data_size": 65536 00:16:56.483 }, 00:16:56.483 { 00:16:56.483 "name": "BaseBdev3", 00:16:56.483 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:56.483 "is_configured": true, 00:16:56.483 "data_offset": 0, 00:16:56.483 "data_size": 65536 00:16:56.483 }, 00:16:56.483 { 00:16:56.483 "name": "BaseBdev4", 00:16:56.483 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:56.483 "is_configured": true, 00:16:56.483 "data_offset": 0, 00:16:56.483 "data_size": 65536 00:16:56.483 } 00:16:56.483 ] 00:16:56.483 }' 00:16:56.483 15:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.483 [2024-11-25 15:43:55.048664] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:56.483 [2024-11-25 15:43:55.048783] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:56.483 [2024-11-25 15:43:55.048855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.483 15:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.483 15:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.483 15:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.483 15:43:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:57.864 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.864 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.864 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.864 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.864 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:57.864 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.864 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.864 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.864 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.864 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.864 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.864 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.864 "name": "raid_bdev1", 00:16:57.864 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:57.864 "strip_size_kb": 64, 00:16:57.864 "state": "online", 00:16:57.864 "raid_level": "raid5f", 00:16:57.865 "superblock": false, 00:16:57.865 "num_base_bdevs": 4, 00:16:57.865 "num_base_bdevs_discovered": 4, 00:16:57.865 "num_base_bdevs_operational": 4, 00:16:57.865 "base_bdevs_list": [ 00:16:57.865 { 00:16:57.865 "name": "spare", 00:16:57.865 "uuid": "239ce912-ff85-54f1-99d8-4edf6366455f", 00:16:57.865 "is_configured": true, 00:16:57.865 "data_offset": 0, 00:16:57.865 "data_size": 65536 00:16:57.865 }, 00:16:57.865 { 00:16:57.865 "name": "BaseBdev2", 00:16:57.865 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:57.865 "is_configured": true, 00:16:57.865 "data_offset": 0, 00:16:57.865 "data_size": 65536 00:16:57.865 }, 00:16:57.865 { 00:16:57.865 "name": "BaseBdev3", 00:16:57.865 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:57.865 "is_configured": true, 00:16:57.865 "data_offset": 0, 00:16:57.865 "data_size": 65536 00:16:57.865 }, 00:16:57.865 { 00:16:57.865 "name": "BaseBdev4", 00:16:57.865 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:57.865 "is_configured": true, 00:16:57.865 "data_offset": 0, 
00:16:57.865 "data_size": 65536 00:16:57.865 } 00:16:57.865 ] 00:16:57.865 }' 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.865 "name": "raid_bdev1", 00:16:57.865 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:57.865 "strip_size_kb": 64, 00:16:57.865 "state": "online", 00:16:57.865 "raid_level": 
"raid5f", 00:16:57.865 "superblock": false, 00:16:57.865 "num_base_bdevs": 4, 00:16:57.865 "num_base_bdevs_discovered": 4, 00:16:57.865 "num_base_bdevs_operational": 4, 00:16:57.865 "base_bdevs_list": [ 00:16:57.865 { 00:16:57.865 "name": "spare", 00:16:57.865 "uuid": "239ce912-ff85-54f1-99d8-4edf6366455f", 00:16:57.865 "is_configured": true, 00:16:57.865 "data_offset": 0, 00:16:57.865 "data_size": 65536 00:16:57.865 }, 00:16:57.865 { 00:16:57.865 "name": "BaseBdev2", 00:16:57.865 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:57.865 "is_configured": true, 00:16:57.865 "data_offset": 0, 00:16:57.865 "data_size": 65536 00:16:57.865 }, 00:16:57.865 { 00:16:57.865 "name": "BaseBdev3", 00:16:57.865 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:57.865 "is_configured": true, 00:16:57.865 "data_offset": 0, 00:16:57.865 "data_size": 65536 00:16:57.865 }, 00:16:57.865 { 00:16:57.865 "name": "BaseBdev4", 00:16:57.865 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:57.865 "is_configured": true, 00:16:57.865 "data_offset": 0, 00:16:57.865 "data_size": 65536 00:16:57.865 } 00:16:57.865 ] 00:16:57.865 }' 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.865 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.865 "name": "raid_bdev1", 00:16:57.865 "uuid": "20823668-7973-4dad-a396-3e90193ccb8a", 00:16:57.865 "strip_size_kb": 64, 00:16:57.865 "state": "online", 00:16:57.865 "raid_level": "raid5f", 00:16:57.865 "superblock": false, 00:16:57.865 "num_base_bdevs": 4, 00:16:57.865 "num_base_bdevs_discovered": 4, 00:16:57.865 "num_base_bdevs_operational": 4, 00:16:57.865 "base_bdevs_list": [ 00:16:57.865 { 00:16:57.865 "name": "spare", 00:16:57.865 "uuid": "239ce912-ff85-54f1-99d8-4edf6366455f", 00:16:57.865 "is_configured": true, 00:16:57.865 "data_offset": 0, 00:16:57.865 "data_size": 65536 00:16:57.865 }, 00:16:57.865 { 00:16:57.865 "name": "BaseBdev2", 
00:16:57.865 "uuid": "fe353865-cabc-5187-8c41-29e1986dffc2", 00:16:57.865 "is_configured": true, 00:16:57.865 "data_offset": 0, 00:16:57.865 "data_size": 65536 00:16:57.865 }, 00:16:57.865 { 00:16:57.865 "name": "BaseBdev3", 00:16:57.865 "uuid": "90662dc4-6cc0-53ba-af43-cb0c22c90d93", 00:16:57.865 "is_configured": true, 00:16:57.865 "data_offset": 0, 00:16:57.865 "data_size": 65536 00:16:57.866 }, 00:16:57.866 { 00:16:57.866 "name": "BaseBdev4", 00:16:57.866 "uuid": "632904fd-26fa-5354-84ff-e82fca1b56d8", 00:16:57.866 "is_configured": true, 00:16:57.866 "data_offset": 0, 00:16:57.866 "data_size": 65536 00:16:57.866 } 00:16:57.866 ] 00:16:57.866 }' 00:16:57.866 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.866 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.435 [2024-11-25 15:43:56.879825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.435 [2024-11-25 15:43:56.879903] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.435 [2024-11-25 15:43:56.880016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.435 [2024-11-25 15:43:56.880137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.435 [2024-11-25 15:43:56.880185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:58.435 15:43:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:58.694 /dev/nbd0 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.694 1+0 records in 00:16:58.694 1+0 records out 00:16:58.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322666 s, 12.7 MB/s 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:58.694 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:58.694 /dev/nbd1 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.953 1+0 records in 00:16:58.953 1+0 records out 00:16:58.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448025 s, 9.1 MB/s 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:58.953 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:59.213 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:59.213 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:59.213 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:59.213 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:59.213 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:59.213 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:16:59.213 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:59.213 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:59.213 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:59.213 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:59.472 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:59.472 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:59.472 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:59.472 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:59.472 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:59.472 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:59.472 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:59.472 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:59.472 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:59.472 15:43:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84201 00:16:59.472 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84201 ']' 00:16:59.472 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84201 00:16:59.472 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:59.472 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.472 15:43:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 84201 00:16:59.472 15:43:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.472 15:43:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.472 15:43:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84201' 00:16:59.472 killing process with pid 84201 00:16:59.472 Received shutdown signal, test time was about 60.000000 seconds 00:16:59.472 00:16:59.472 Latency(us) 00:16:59.472 [2024-11-25T15:43:58.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.472 [2024-11-25T15:43:58.153Z] =================================================================================================================== 00:16:59.472 [2024-11-25T15:43:58.153Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:59.472 15:43:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84201 00:16:59.472 [2024-11-25 15:43:58.021805] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:59.472 15:43:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84201 00:17:00.041 [2024-11-25 15:43:58.474938] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:00.979 00:17:00.979 real 0m19.874s 00:17:00.979 user 0m23.901s 00:17:00.979 sys 0m2.139s 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.979 ************************************ 00:17:00.979 END TEST raid5f_rebuild_test 00:17:00.979 ************************************ 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.979 15:43:59 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:00.979 15:43:59 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:00.979 15:43:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.979 15:43:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.979 ************************************ 00:17:00.979 START TEST raid5f_rebuild_test_sb 00:17:00.979 ************************************ 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.979 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:00.980 15:43:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84726 
00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84726 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84726 ']' 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.980 15:43:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.980 [2024-11-25 15:43:59.656268] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:17:00.980 [2024-11-25 15:43:59.656446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:00.980 Zero copy mechanism will not be used. 
00:17:00.980 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84726 ] 00:17:01.240 [2024-11-25 15:43:59.815822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.499 [2024-11-25 15:43:59.924592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.499 [2024-11-25 15:44:00.106730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.499 [2024-11-25 15:44:00.106835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.070 BaseBdev1_malloc 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.070 [2024-11-25 15:44:00.512483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:02.070 [2024-11-25 15:44:00.512551] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:02.070 [2024-11-25 15:44:00.512572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:02.070 [2024-11-25 15:44:00.512582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.070 [2024-11-25 15:44:00.514585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.070 [2024-11-25 15:44:00.514700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:02.070 BaseBdev1 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.070 BaseBdev2_malloc 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.070 [2024-11-25 15:44:00.566789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:02.070 [2024-11-25 15:44:00.566847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.070 [2024-11-25 15:44:00.566864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:02.070 
[2024-11-25 15:44:00.566876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.070 [2024-11-25 15:44:00.568951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.070 [2024-11-25 15:44:00.568991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:02.070 BaseBdev2 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.070 BaseBdev3_malloc 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.070 [2024-11-25 15:44:00.652975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:02.070 [2024-11-25 15:44:00.653048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.070 [2024-11-25 15:44:00.653069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:02.070 [2024-11-25 15:44:00.653080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.070 [2024-11-25 15:44:00.655016] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.070 [2024-11-25 15:44:00.655073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:02.070 BaseBdev3 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.070 BaseBdev4_malloc 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.070 [2024-11-25 15:44:00.706262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:02.070 [2024-11-25 15:44:00.706376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.070 [2024-11-25 15:44:00.706399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:02.070 [2024-11-25 15:44:00.706409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.070 [2024-11-25 15:44:00.708467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.070 [2024-11-25 15:44:00.708557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 00:17:02.070 BaseBdev4 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.070 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.331 spare_malloc 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.331 spare_delay 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.331 [2024-11-25 15:44:00.770554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:02.331 [2024-11-25 15:44:00.770606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.331 [2024-11-25 15:44:00.770640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:02.331 [2024-11-25 15:44:00.770649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.331 [2024-11-25 15:44:00.772607] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.331 [2024-11-25 15:44:00.772686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:02.331 spare 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.331 [2024-11-25 15:44:00.782588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.331 [2024-11-25 15:44:00.784383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:02.331 [2024-11-25 15:44:00.784498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:02.331 [2024-11-25 15:44:00.784571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:02.331 [2024-11-25 15:44:00.784769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:02.331 [2024-11-25 15:44:00.784820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:02.331 [2024-11-25 15:44:00.785069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:02.331 [2024-11-25 15:44:00.792427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:02.331 [2024-11-25 15:44:00.792480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:02.331 [2024-11-25 15:44:00.792740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.331 "name": "raid_bdev1", 00:17:02.331 "uuid": 
"2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:02.331 "strip_size_kb": 64, 00:17:02.331 "state": "online", 00:17:02.331 "raid_level": "raid5f", 00:17:02.331 "superblock": true, 00:17:02.331 "num_base_bdevs": 4, 00:17:02.331 "num_base_bdevs_discovered": 4, 00:17:02.331 "num_base_bdevs_operational": 4, 00:17:02.331 "base_bdevs_list": [ 00:17:02.331 { 00:17:02.331 "name": "BaseBdev1", 00:17:02.331 "uuid": "f13978f0-6373-5d8b-85b6-9c6a5b816a65", 00:17:02.331 "is_configured": true, 00:17:02.331 "data_offset": 2048, 00:17:02.331 "data_size": 63488 00:17:02.331 }, 00:17:02.331 { 00:17:02.331 "name": "BaseBdev2", 00:17:02.331 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:02.331 "is_configured": true, 00:17:02.331 "data_offset": 2048, 00:17:02.331 "data_size": 63488 00:17:02.331 }, 00:17:02.331 { 00:17:02.331 "name": "BaseBdev3", 00:17:02.331 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:02.331 "is_configured": true, 00:17:02.331 "data_offset": 2048, 00:17:02.331 "data_size": 63488 00:17:02.331 }, 00:17:02.331 { 00:17:02.331 "name": "BaseBdev4", 00:17:02.331 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:02.331 "is_configured": true, 00:17:02.331 "data_offset": 2048, 00:17:02.331 "data_size": 63488 00:17:02.331 } 00:17:02.331 ] 00:17:02.331 }' 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.331 15:44:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.591 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:02.591 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.591 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.591 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:02.591 [2024-11-25 15:44:01.228236] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:17:02.591 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.591 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:02.591 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:02.850 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.850 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.850 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.850 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:02.851 [2024-11-25 15:44:01.459664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:02.851 /dev/nbd0 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.851 1+0 records in 00:17:02.851 1+0 records out 00:17:02.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539256 s, 7.6 MB/s 00:17:02.851 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.110 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:03.110 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.110 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:03.110 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:03.110 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:03.110 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:03.110 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:03.110 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:03.110 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:03.110 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:03.369 496+0 records in 00:17:03.369 496+0 records out 00:17:03.369 97517568 bytes (98 MB, 93 MiB) copied, 0.440903 s, 221 MB/s 00:17:03.369 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:03.369 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:03.369 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:03.369 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:03.369 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:03.369 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:17:03.369 15:44:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:03.629 [2024-11-25 15:44:02.191263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.629 [2024-11-25 15:44:02.209442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.629 "name": "raid_bdev1", 00:17:03.629 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:03.629 "strip_size_kb": 64, 00:17:03.629 "state": "online", 00:17:03.629 "raid_level": "raid5f", 00:17:03.629 "superblock": true, 00:17:03.629 "num_base_bdevs": 4, 00:17:03.629 "num_base_bdevs_discovered": 3, 00:17:03.629 "num_base_bdevs_operational": 3, 00:17:03.629 "base_bdevs_list": [ 00:17:03.629 { 00:17:03.629 "name": null, 00:17:03.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.629 "is_configured": 
false, 00:17:03.629 "data_offset": 0, 00:17:03.629 "data_size": 63488 00:17:03.629 }, 00:17:03.629 { 00:17:03.629 "name": "BaseBdev2", 00:17:03.629 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:03.629 "is_configured": true, 00:17:03.629 "data_offset": 2048, 00:17:03.629 "data_size": 63488 00:17:03.629 }, 00:17:03.629 { 00:17:03.629 "name": "BaseBdev3", 00:17:03.629 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:03.629 "is_configured": true, 00:17:03.629 "data_offset": 2048, 00:17:03.629 "data_size": 63488 00:17:03.629 }, 00:17:03.629 { 00:17:03.629 "name": "BaseBdev4", 00:17:03.629 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:03.629 "is_configured": true, 00:17:03.629 "data_offset": 2048, 00:17:03.629 "data_size": 63488 00:17:03.629 } 00:17:03.629 ] 00:17:03.629 }' 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.629 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.200 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:04.200 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.200 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.200 [2024-11-25 15:44:02.628698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:04.200 [2024-11-25 15:44:02.645711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:04.200 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.200 15:44:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:04.200 [2024-11-25 15:44:02.654915] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:05.140 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.140 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.140 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.140 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.140 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.140 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.141 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.141 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.141 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.141 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.141 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.141 "name": "raid_bdev1", 00:17:05.141 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:05.141 "strip_size_kb": 64, 00:17:05.141 "state": "online", 00:17:05.141 "raid_level": "raid5f", 00:17:05.141 "superblock": true, 00:17:05.141 "num_base_bdevs": 4, 00:17:05.141 "num_base_bdevs_discovered": 4, 00:17:05.141 "num_base_bdevs_operational": 4, 00:17:05.141 "process": { 00:17:05.141 "type": "rebuild", 00:17:05.141 "target": "spare", 00:17:05.141 "progress": { 00:17:05.141 "blocks": 19200, 00:17:05.141 "percent": 10 00:17:05.141 } 00:17:05.141 }, 00:17:05.141 "base_bdevs_list": [ 00:17:05.141 { 00:17:05.141 "name": "spare", 00:17:05.141 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:05.141 "is_configured": true, 00:17:05.141 "data_offset": 2048, 00:17:05.141 "data_size": 63488 00:17:05.141 }, 
00:17:05.141 { 00:17:05.141 "name": "BaseBdev2", 00:17:05.141 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:05.141 "is_configured": true, 00:17:05.141 "data_offset": 2048, 00:17:05.141 "data_size": 63488 00:17:05.141 }, 00:17:05.141 { 00:17:05.141 "name": "BaseBdev3", 00:17:05.141 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:05.141 "is_configured": true, 00:17:05.141 "data_offset": 2048, 00:17:05.141 "data_size": 63488 00:17:05.141 }, 00:17:05.141 { 00:17:05.141 "name": "BaseBdev4", 00:17:05.141 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:05.141 "is_configured": true, 00:17:05.141 "data_offset": 2048, 00:17:05.141 "data_size": 63488 00:17:05.141 } 00:17:05.141 ] 00:17:05.141 }' 00:17:05.141 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.141 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.141 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.141 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.141 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:05.141 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.141 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.141 [2024-11-25 15:44:03.805497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:05.401 [2024-11-25 15:44:03.860534] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:05.401 [2024-11-25 15:44:03.860647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.401 [2024-11-25 15:44:03.860664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:05.401 
[2024-11-25 15:44:03.860673] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:05.401 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.401 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:05.401 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.402 "name": "raid_bdev1", 00:17:05.402 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:05.402 "strip_size_kb": 64, 00:17:05.402 "state": "online", 00:17:05.402 "raid_level": "raid5f", 00:17:05.402 "superblock": true, 00:17:05.402 "num_base_bdevs": 4, 00:17:05.402 "num_base_bdevs_discovered": 3, 00:17:05.402 "num_base_bdevs_operational": 3, 00:17:05.402 "base_bdevs_list": [ 00:17:05.402 { 00:17:05.402 "name": null, 00:17:05.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.402 "is_configured": false, 00:17:05.402 "data_offset": 0, 00:17:05.402 "data_size": 63488 00:17:05.402 }, 00:17:05.402 { 00:17:05.402 "name": "BaseBdev2", 00:17:05.402 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:05.402 "is_configured": true, 00:17:05.402 "data_offset": 2048, 00:17:05.402 "data_size": 63488 00:17:05.402 }, 00:17:05.402 { 00:17:05.402 "name": "BaseBdev3", 00:17:05.402 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:05.402 "is_configured": true, 00:17:05.402 "data_offset": 2048, 00:17:05.402 "data_size": 63488 00:17:05.402 }, 00:17:05.402 { 00:17:05.402 "name": "BaseBdev4", 00:17:05.402 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:05.402 "is_configured": true, 00:17:05.402 "data_offset": 2048, 00:17:05.402 "data_size": 63488 00:17:05.402 } 00:17:05.402 ] 00:17:05.402 }' 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.402 15:44:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.662 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.662 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.662 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.662 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.662 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.662 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.662 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.662 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.662 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.922 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.922 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.922 "name": "raid_bdev1", 00:17:05.922 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:05.922 "strip_size_kb": 64, 00:17:05.922 "state": "online", 00:17:05.922 "raid_level": "raid5f", 00:17:05.922 "superblock": true, 00:17:05.922 "num_base_bdevs": 4, 00:17:05.922 "num_base_bdevs_discovered": 3, 00:17:05.922 "num_base_bdevs_operational": 3, 00:17:05.922 "base_bdevs_list": [ 00:17:05.922 { 00:17:05.922 "name": null, 00:17:05.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.922 "is_configured": false, 00:17:05.922 "data_offset": 0, 00:17:05.922 "data_size": 63488 00:17:05.922 }, 00:17:05.922 { 00:17:05.922 "name": "BaseBdev2", 00:17:05.922 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:05.922 "is_configured": true, 00:17:05.922 "data_offset": 2048, 00:17:05.922 "data_size": 63488 00:17:05.922 }, 00:17:05.922 { 00:17:05.922 "name": "BaseBdev3", 00:17:05.922 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:05.922 "is_configured": true, 00:17:05.922 "data_offset": 2048, 00:17:05.922 "data_size": 63488 00:17:05.922 }, 00:17:05.922 { 00:17:05.922 "name": "BaseBdev4", 00:17:05.923 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 
00:17:05.923 "is_configured": true, 00:17:05.923 "data_offset": 2048, 00:17:05.923 "data_size": 63488 00:17:05.923 } 00:17:05.923 ] 00:17:05.923 }' 00:17:05.923 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.923 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:05.923 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.923 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:05.923 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:05.923 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.923 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.923 [2024-11-25 15:44:04.484363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.923 [2024-11-25 15:44:04.499524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:05.923 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.923 15:44:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:05.923 [2024-11-25 15:44:04.508356] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.868 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.868 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.868 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.868 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:06.868 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.868 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.868 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.868 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.868 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.868 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.203 "name": "raid_bdev1", 00:17:07.203 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:07.203 "strip_size_kb": 64, 00:17:07.203 "state": "online", 00:17:07.203 "raid_level": "raid5f", 00:17:07.203 "superblock": true, 00:17:07.203 "num_base_bdevs": 4, 00:17:07.203 "num_base_bdevs_discovered": 4, 00:17:07.203 "num_base_bdevs_operational": 4, 00:17:07.203 "process": { 00:17:07.203 "type": "rebuild", 00:17:07.203 "target": "spare", 00:17:07.203 "progress": { 00:17:07.203 "blocks": 19200, 00:17:07.203 "percent": 10 00:17:07.203 } 00:17:07.203 }, 00:17:07.203 "base_bdevs_list": [ 00:17:07.203 { 00:17:07.203 "name": "spare", 00:17:07.203 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:07.203 "is_configured": true, 00:17:07.203 "data_offset": 2048, 00:17:07.203 "data_size": 63488 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "name": "BaseBdev2", 00:17:07.203 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:07.203 "is_configured": true, 00:17:07.203 "data_offset": 2048, 00:17:07.203 "data_size": 63488 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "name": "BaseBdev3", 00:17:07.203 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:07.203 "is_configured": true, 00:17:07.203 "data_offset": 2048, 
00:17:07.203 "data_size": 63488 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "name": "BaseBdev4", 00:17:07.203 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:07.203 "is_configured": true, 00:17:07.203 "data_offset": 2048, 00:17:07.203 "data_size": 63488 00:17:07.203 } 00:17:07.203 ] 00:17:07.203 }' 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:07.203 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=618 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.203 "name": "raid_bdev1", 00:17:07.203 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:07.203 "strip_size_kb": 64, 00:17:07.203 "state": "online", 00:17:07.203 "raid_level": "raid5f", 00:17:07.203 "superblock": true, 00:17:07.203 "num_base_bdevs": 4, 00:17:07.203 "num_base_bdevs_discovered": 4, 00:17:07.203 "num_base_bdevs_operational": 4, 00:17:07.203 "process": { 00:17:07.203 "type": "rebuild", 00:17:07.203 "target": "spare", 00:17:07.203 "progress": { 00:17:07.203 "blocks": 21120, 00:17:07.203 "percent": 11 00:17:07.203 } 00:17:07.203 }, 00:17:07.203 "base_bdevs_list": [ 00:17:07.203 { 00:17:07.203 "name": "spare", 00:17:07.203 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:07.203 "is_configured": true, 00:17:07.203 "data_offset": 2048, 00:17:07.203 "data_size": 63488 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "name": "BaseBdev2", 00:17:07.203 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:07.203 "is_configured": true, 00:17:07.203 "data_offset": 2048, 00:17:07.203 "data_size": 63488 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "name": "BaseBdev3", 00:17:07.203 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:07.203 "is_configured": true, 00:17:07.203 "data_offset": 2048, 
00:17:07.203 "data_size": 63488 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "name": "BaseBdev4", 00:17:07.203 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:07.203 "is_configured": true, 00:17:07.203 "data_offset": 2048, 00:17:07.203 "data_size": 63488 00:17:07.203 } 00:17:07.203 ] 00:17:07.203 }' 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.203 15:44:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.142 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.142 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.142 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.142 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.142 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.142 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.142 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.142 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.142 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.142 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:08.402 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.402 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.402 "name": "raid_bdev1", 00:17:08.402 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:08.402 "strip_size_kb": 64, 00:17:08.402 "state": "online", 00:17:08.402 "raid_level": "raid5f", 00:17:08.402 "superblock": true, 00:17:08.402 "num_base_bdevs": 4, 00:17:08.402 "num_base_bdevs_discovered": 4, 00:17:08.402 "num_base_bdevs_operational": 4, 00:17:08.402 "process": { 00:17:08.402 "type": "rebuild", 00:17:08.402 "target": "spare", 00:17:08.402 "progress": { 00:17:08.402 "blocks": 42240, 00:17:08.402 "percent": 22 00:17:08.402 } 00:17:08.402 }, 00:17:08.402 "base_bdevs_list": [ 00:17:08.402 { 00:17:08.402 "name": "spare", 00:17:08.402 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:08.402 "is_configured": true, 00:17:08.402 "data_offset": 2048, 00:17:08.402 "data_size": 63488 00:17:08.402 }, 00:17:08.402 { 00:17:08.402 "name": "BaseBdev2", 00:17:08.402 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:08.402 "is_configured": true, 00:17:08.402 "data_offset": 2048, 00:17:08.402 "data_size": 63488 00:17:08.402 }, 00:17:08.402 { 00:17:08.402 "name": "BaseBdev3", 00:17:08.402 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:08.402 "is_configured": true, 00:17:08.402 "data_offset": 2048, 00:17:08.402 "data_size": 63488 00:17:08.402 }, 00:17:08.402 { 00:17:08.402 "name": "BaseBdev4", 00:17:08.402 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:08.402 "is_configured": true, 00:17:08.402 "data_offset": 2048, 00:17:08.402 "data_size": 63488 00:17:08.402 } 00:17:08.402 ] 00:17:08.402 }' 00:17:08.402 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.402 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.402 15:44:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.402 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.402 15:44:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:09.342 15:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.342 15:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.342 15:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.342 15:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.342 15:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.342 15:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.342 15:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.342 15:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.342 15:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.342 15:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.342 15:44:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.342 15:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.342 "name": "raid_bdev1", 00:17:09.342 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:09.342 "strip_size_kb": 64, 00:17:09.342 "state": "online", 00:17:09.342 "raid_level": "raid5f", 00:17:09.342 "superblock": true, 00:17:09.342 "num_base_bdevs": 4, 00:17:09.342 "num_base_bdevs_discovered": 4, 00:17:09.342 "num_base_bdevs_operational": 
4, 00:17:09.342 "process": { 00:17:09.342 "type": "rebuild", 00:17:09.342 "target": "spare", 00:17:09.342 "progress": { 00:17:09.342 "blocks": 65280, 00:17:09.343 "percent": 34 00:17:09.343 } 00:17:09.343 }, 00:17:09.343 "base_bdevs_list": [ 00:17:09.343 { 00:17:09.343 "name": "spare", 00:17:09.343 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:09.343 "is_configured": true, 00:17:09.343 "data_offset": 2048, 00:17:09.343 "data_size": 63488 00:17:09.343 }, 00:17:09.343 { 00:17:09.343 "name": "BaseBdev2", 00:17:09.343 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:09.343 "is_configured": true, 00:17:09.343 "data_offset": 2048, 00:17:09.343 "data_size": 63488 00:17:09.343 }, 00:17:09.343 { 00:17:09.343 "name": "BaseBdev3", 00:17:09.343 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:09.343 "is_configured": true, 00:17:09.343 "data_offset": 2048, 00:17:09.343 "data_size": 63488 00:17:09.343 }, 00:17:09.343 { 00:17:09.343 "name": "BaseBdev4", 00:17:09.343 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:09.343 "is_configured": true, 00:17:09.343 "data_offset": 2048, 00:17:09.343 "data_size": 63488 00:17:09.343 } 00:17:09.343 ] 00:17:09.343 }' 00:17:09.343 15:44:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.343 15:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.343 15:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.603 15:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.603 15:44:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.543 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.543 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.543 
15:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.543 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.543 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.543 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.543 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.543 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.543 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.543 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.543 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.543 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.543 "name": "raid_bdev1", 00:17:10.543 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:10.543 "strip_size_kb": 64, 00:17:10.543 "state": "online", 00:17:10.544 "raid_level": "raid5f", 00:17:10.544 "superblock": true, 00:17:10.544 "num_base_bdevs": 4, 00:17:10.544 "num_base_bdevs_discovered": 4, 00:17:10.544 "num_base_bdevs_operational": 4, 00:17:10.544 "process": { 00:17:10.544 "type": "rebuild", 00:17:10.544 "target": "spare", 00:17:10.544 "progress": { 00:17:10.544 "blocks": 86400, 00:17:10.544 "percent": 45 00:17:10.544 } 00:17:10.544 }, 00:17:10.544 "base_bdevs_list": [ 00:17:10.544 { 00:17:10.544 "name": "spare", 00:17:10.544 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:10.544 "is_configured": true, 00:17:10.544 "data_offset": 2048, 00:17:10.544 "data_size": 63488 00:17:10.544 }, 00:17:10.544 { 00:17:10.544 "name": "BaseBdev2", 00:17:10.544 "uuid": 
"9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:10.544 "is_configured": true, 00:17:10.544 "data_offset": 2048, 00:17:10.544 "data_size": 63488 00:17:10.544 }, 00:17:10.544 { 00:17:10.544 "name": "BaseBdev3", 00:17:10.544 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:10.544 "is_configured": true, 00:17:10.544 "data_offset": 2048, 00:17:10.544 "data_size": 63488 00:17:10.544 }, 00:17:10.544 { 00:17:10.544 "name": "BaseBdev4", 00:17:10.544 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:10.544 "is_configured": true, 00:17:10.544 "data_offset": 2048, 00:17:10.544 "data_size": 63488 00:17:10.544 } 00:17:10.544 ] 00:17:10.544 }' 00:17:10.544 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.544 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.544 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.544 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.544 15:44:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.959 "name": "raid_bdev1", 00:17:11.959 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:11.959 "strip_size_kb": 64, 00:17:11.959 "state": "online", 00:17:11.959 "raid_level": "raid5f", 00:17:11.959 "superblock": true, 00:17:11.959 "num_base_bdevs": 4, 00:17:11.959 "num_base_bdevs_discovered": 4, 00:17:11.959 "num_base_bdevs_operational": 4, 00:17:11.959 "process": { 00:17:11.959 "type": "rebuild", 00:17:11.959 "target": "spare", 00:17:11.959 "progress": { 00:17:11.959 "blocks": 109440, 00:17:11.959 "percent": 57 00:17:11.959 } 00:17:11.959 }, 00:17:11.959 "base_bdevs_list": [ 00:17:11.959 { 00:17:11.959 "name": "spare", 00:17:11.959 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:11.959 "is_configured": true, 00:17:11.959 "data_offset": 2048, 00:17:11.959 "data_size": 63488 00:17:11.959 }, 00:17:11.959 { 00:17:11.959 "name": "BaseBdev2", 00:17:11.959 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:11.959 "is_configured": true, 00:17:11.959 "data_offset": 2048, 00:17:11.959 "data_size": 63488 00:17:11.959 }, 00:17:11.959 { 00:17:11.959 "name": "BaseBdev3", 00:17:11.959 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:11.959 "is_configured": true, 00:17:11.959 "data_offset": 2048, 00:17:11.959 "data_size": 63488 00:17:11.959 }, 00:17:11.959 { 00:17:11.959 "name": "BaseBdev4", 00:17:11.959 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:11.959 "is_configured": true, 00:17:11.959 "data_offset": 
2048, 00:17:11.959 "data_size": 63488 00:17:11.959 } 00:17:11.959 ] 00:17:11.959 }' 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.959 15:44:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.900 
"name": "raid_bdev1", 00:17:12.900 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:12.900 "strip_size_kb": 64, 00:17:12.900 "state": "online", 00:17:12.900 "raid_level": "raid5f", 00:17:12.900 "superblock": true, 00:17:12.900 "num_base_bdevs": 4, 00:17:12.900 "num_base_bdevs_discovered": 4, 00:17:12.900 "num_base_bdevs_operational": 4, 00:17:12.900 "process": { 00:17:12.900 "type": "rebuild", 00:17:12.900 "target": "spare", 00:17:12.900 "progress": { 00:17:12.900 "blocks": 130560, 00:17:12.900 "percent": 68 00:17:12.900 } 00:17:12.900 }, 00:17:12.900 "base_bdevs_list": [ 00:17:12.900 { 00:17:12.900 "name": "spare", 00:17:12.900 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:12.900 "is_configured": true, 00:17:12.900 "data_offset": 2048, 00:17:12.900 "data_size": 63488 00:17:12.900 }, 00:17:12.900 { 00:17:12.900 "name": "BaseBdev2", 00:17:12.900 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:12.900 "is_configured": true, 00:17:12.900 "data_offset": 2048, 00:17:12.900 "data_size": 63488 00:17:12.900 }, 00:17:12.900 { 00:17:12.900 "name": "BaseBdev3", 00:17:12.900 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:12.900 "is_configured": true, 00:17:12.900 "data_offset": 2048, 00:17:12.900 "data_size": 63488 00:17:12.900 }, 00:17:12.900 { 00:17:12.900 "name": "BaseBdev4", 00:17:12.900 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:12.900 "is_configured": true, 00:17:12.900 "data_offset": 2048, 00:17:12.900 "data_size": 63488 00:17:12.900 } 00:17:12.900 ] 00:17:12.900 }' 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.900 15:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.900 
15:44:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.841 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.841 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.841 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.841 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.841 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.841 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.841 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.841 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.841 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.841 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.841 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.102 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.102 "name": "raid_bdev1", 00:17:14.102 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:14.102 "strip_size_kb": 64, 00:17:14.102 "state": "online", 00:17:14.102 "raid_level": "raid5f", 00:17:14.102 "superblock": true, 00:17:14.102 "num_base_bdevs": 4, 00:17:14.102 "num_base_bdevs_discovered": 4, 00:17:14.102 "num_base_bdevs_operational": 4, 00:17:14.102 "process": { 00:17:14.102 "type": "rebuild", 00:17:14.102 "target": "spare", 00:17:14.102 "progress": { 00:17:14.102 "blocks": 151680, 00:17:14.102 "percent": 79 00:17:14.102 } 00:17:14.102 }, 
00:17:14.102 "base_bdevs_list": [ 00:17:14.102 { 00:17:14.102 "name": "spare", 00:17:14.102 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:14.102 "is_configured": true, 00:17:14.102 "data_offset": 2048, 00:17:14.102 "data_size": 63488 00:17:14.102 }, 00:17:14.102 { 00:17:14.102 "name": "BaseBdev2", 00:17:14.102 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:14.102 "is_configured": true, 00:17:14.102 "data_offset": 2048, 00:17:14.102 "data_size": 63488 00:17:14.102 }, 00:17:14.102 { 00:17:14.102 "name": "BaseBdev3", 00:17:14.102 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:14.102 "is_configured": true, 00:17:14.102 "data_offset": 2048, 00:17:14.102 "data_size": 63488 00:17:14.102 }, 00:17:14.102 { 00:17:14.102 "name": "BaseBdev4", 00:17:14.102 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:14.102 "is_configured": true, 00:17:14.102 "data_offset": 2048, 00:17:14.102 "data_size": 63488 00:17:14.102 } 00:17:14.102 ] 00:17:14.102 }' 00:17:14.102 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.102 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.102 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.102 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.102 15:44:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.043 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.043 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.043 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.043 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:15.043 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.043 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.043 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.043 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.043 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.043 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.043 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.043 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.043 "name": "raid_bdev1", 00:17:15.043 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:15.043 "strip_size_kb": 64, 00:17:15.043 "state": "online", 00:17:15.043 "raid_level": "raid5f", 00:17:15.043 "superblock": true, 00:17:15.043 "num_base_bdevs": 4, 00:17:15.043 "num_base_bdevs_discovered": 4, 00:17:15.043 "num_base_bdevs_operational": 4, 00:17:15.043 "process": { 00:17:15.043 "type": "rebuild", 00:17:15.043 "target": "spare", 00:17:15.043 "progress": { 00:17:15.043 "blocks": 174720, 00:17:15.043 "percent": 91 00:17:15.043 } 00:17:15.043 }, 00:17:15.043 "base_bdevs_list": [ 00:17:15.043 { 00:17:15.043 "name": "spare", 00:17:15.043 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:15.043 "is_configured": true, 00:17:15.043 "data_offset": 2048, 00:17:15.043 "data_size": 63488 00:17:15.043 }, 00:17:15.043 { 00:17:15.043 "name": "BaseBdev2", 00:17:15.043 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:15.043 "is_configured": true, 00:17:15.043 "data_offset": 2048, 00:17:15.043 "data_size": 63488 00:17:15.043 }, 00:17:15.043 { 00:17:15.043 "name": "BaseBdev3", 
00:17:15.043 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:15.043 "is_configured": true, 00:17:15.043 "data_offset": 2048, 00:17:15.043 "data_size": 63488 00:17:15.043 }, 00:17:15.043 { 00:17:15.043 "name": "BaseBdev4", 00:17:15.043 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:15.043 "is_configured": true, 00:17:15.043 "data_offset": 2048, 00:17:15.043 "data_size": 63488 00:17:15.043 } 00:17:15.043 ] 00:17:15.043 }' 00:17:15.043 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.303 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.303 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.303 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.303 15:44:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:16.240 [2024-11-25 15:44:14.552682] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:16.240 [2024-11-25 15:44:14.552806] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:16.240 [2024-11-25 15:44:14.552968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.240 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.240 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.240 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.240 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.240 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.240 15:44:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.240 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.240 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.240 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.240 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.240 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.240 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.240 "name": "raid_bdev1", 00:17:16.240 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:16.240 "strip_size_kb": 64, 00:17:16.240 "state": "online", 00:17:16.240 "raid_level": "raid5f", 00:17:16.240 "superblock": true, 00:17:16.240 "num_base_bdevs": 4, 00:17:16.240 "num_base_bdevs_discovered": 4, 00:17:16.240 "num_base_bdevs_operational": 4, 00:17:16.240 "base_bdevs_list": [ 00:17:16.240 { 00:17:16.240 "name": "spare", 00:17:16.240 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:16.240 "is_configured": true, 00:17:16.240 "data_offset": 2048, 00:17:16.240 "data_size": 63488 00:17:16.240 }, 00:17:16.240 { 00:17:16.240 "name": "BaseBdev2", 00:17:16.240 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:16.240 "is_configured": true, 00:17:16.240 "data_offset": 2048, 00:17:16.241 "data_size": 63488 00:17:16.241 }, 00:17:16.241 { 00:17:16.241 "name": "BaseBdev3", 00:17:16.241 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:16.241 "is_configured": true, 00:17:16.241 "data_offset": 2048, 00:17:16.241 "data_size": 63488 00:17:16.241 }, 00:17:16.241 { 00:17:16.241 "name": "BaseBdev4", 00:17:16.241 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:16.241 "is_configured": true, 00:17:16.241 "data_offset": 2048, 
00:17:16.241 "data_size": 63488 00:17:16.241 } 00:17:16.241 ] 00:17:16.241 }' 00:17:16.241 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.241 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:16.241 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.501 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:16.501 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:16.501 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:16.501 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.501 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:16.501 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:16.501 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.501 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.501 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.501 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.501 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.501 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.501 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.501 "name": "raid_bdev1", 00:17:16.501 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:16.501 "strip_size_kb": 64, 00:17:16.501 
"state": "online", 00:17:16.501 "raid_level": "raid5f", 00:17:16.501 "superblock": true, 00:17:16.501 "num_base_bdevs": 4, 00:17:16.501 "num_base_bdevs_discovered": 4, 00:17:16.501 "num_base_bdevs_operational": 4, 00:17:16.501 "base_bdevs_list": [ 00:17:16.501 { 00:17:16.501 "name": "spare", 00:17:16.501 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:16.501 "is_configured": true, 00:17:16.501 "data_offset": 2048, 00:17:16.501 "data_size": 63488 00:17:16.501 }, 00:17:16.501 { 00:17:16.501 "name": "BaseBdev2", 00:17:16.501 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:16.501 "is_configured": true, 00:17:16.501 "data_offset": 2048, 00:17:16.501 "data_size": 63488 00:17:16.501 }, 00:17:16.501 { 00:17:16.501 "name": "BaseBdev3", 00:17:16.501 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:16.501 "is_configured": true, 00:17:16.501 "data_offset": 2048, 00:17:16.501 "data_size": 63488 00:17:16.501 }, 00:17:16.501 { 00:17:16.501 "name": "BaseBdev4", 00:17:16.501 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:16.501 "is_configured": true, 00:17:16.501 "data_offset": 2048, 00:17:16.501 "data_size": 63488 00:17:16.501 } 00:17:16.501 ] 00:17:16.501 }' 00:17:16.501 15:44:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.501 "name": "raid_bdev1", 00:17:16.501 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:16.501 "strip_size_kb": 64, 00:17:16.501 "state": "online", 00:17:16.501 "raid_level": "raid5f", 00:17:16.501 "superblock": true, 00:17:16.501 "num_base_bdevs": 4, 00:17:16.501 "num_base_bdevs_discovered": 4, 00:17:16.501 "num_base_bdevs_operational": 4, 00:17:16.501 "base_bdevs_list": [ 00:17:16.501 { 00:17:16.501 "name": "spare", 00:17:16.501 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:16.501 "is_configured": true, 00:17:16.501 
"data_offset": 2048, 00:17:16.501 "data_size": 63488 00:17:16.501 }, 00:17:16.501 { 00:17:16.501 "name": "BaseBdev2", 00:17:16.501 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:16.501 "is_configured": true, 00:17:16.501 "data_offset": 2048, 00:17:16.501 "data_size": 63488 00:17:16.501 }, 00:17:16.501 { 00:17:16.501 "name": "BaseBdev3", 00:17:16.501 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:16.501 "is_configured": true, 00:17:16.501 "data_offset": 2048, 00:17:16.501 "data_size": 63488 00:17:16.501 }, 00:17:16.501 { 00:17:16.501 "name": "BaseBdev4", 00:17:16.501 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:16.501 "is_configured": true, 00:17:16.501 "data_offset": 2048, 00:17:16.501 "data_size": 63488 00:17:16.501 } 00:17:16.501 ] 00:17:16.501 }' 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.501 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.072 [2024-11-25 15:44:15.510244] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.072 [2024-11-25 15:44:15.510326] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.072 [2024-11-25 15:44:15.510434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.072 [2024-11-25 15:44:15.510554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.072 [2024-11-25 15:44:15.510610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:17.072 
15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:17.072 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:17.332 /dev/nbd0 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.332 1+0 records in 00:17:17.332 1+0 records out 00:17:17.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439026 s, 9.3 MB/s 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:17.332 15:44:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:17.332 /dev/nbd1 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.593 1+0 records in 00:17:17.593 1+0 records out 00:17:17.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353813 s, 11.6 MB/s 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.593 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:17.853 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:17.853 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:17.853 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:17.853 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.853 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.853 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:17.853 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:17.853 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.853 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.853 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:18.112 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:18.112 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:18.112 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:18.112 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:18.112 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:18.112 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:18.112 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:18.112 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:18.112 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:18.112 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:18.112 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.112 
15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.112 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.112 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:18.112 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.112 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.112 [2024-11-25 15:44:16.673434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:18.112 [2024-11-25 15:44:16.673493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.112 [2024-11-25 15:44:16.673519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:18.112 [2024-11-25 15:44:16.673528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.112 [2024-11-25 15:44:16.675638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.112 [2024-11-25 15:44:16.675727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:18.113 [2024-11-25 15:44:16.675824] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:18.113 [2024-11-25 15:44:16.675878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.113 [2024-11-25 15:44:16.676038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.113 [2024-11-25 15:44:16.676130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:18.113 [2024-11-25 15:44:16.676205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:18.113 spare 00:17:18.113 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:18.113 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:18.113 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.113 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.113 [2024-11-25 15:44:16.776109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:18.113 [2024-11-25 15:44:16.776135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:18.113 [2024-11-25 15:44:16.776371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:18.113 [2024-11-25 15:44:16.782798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:18.113 [2024-11-25 15:44:16.782817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:18.113 [2024-11-25 15:44:16.782974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.113 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.372 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.373 "name": "raid_bdev1", 00:17:18.373 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:18.373 "strip_size_kb": 64, 00:17:18.373 "state": "online", 00:17:18.373 "raid_level": "raid5f", 00:17:18.373 "superblock": true, 00:17:18.373 "num_base_bdevs": 4, 00:17:18.373 "num_base_bdevs_discovered": 4, 00:17:18.373 "num_base_bdevs_operational": 4, 00:17:18.373 "base_bdevs_list": [ 00:17:18.373 { 00:17:18.373 "name": "spare", 00:17:18.373 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:18.373 "is_configured": true, 00:17:18.373 "data_offset": 2048, 00:17:18.373 "data_size": 63488 00:17:18.373 }, 00:17:18.373 { 00:17:18.373 "name": "BaseBdev2", 00:17:18.373 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:18.373 "is_configured": true, 00:17:18.373 "data_offset": 2048, 00:17:18.373 "data_size": 63488 00:17:18.373 }, 00:17:18.373 { 00:17:18.373 "name": "BaseBdev3", 00:17:18.373 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:18.373 
"is_configured": true, 00:17:18.373 "data_offset": 2048, 00:17:18.373 "data_size": 63488 00:17:18.373 }, 00:17:18.373 { 00:17:18.373 "name": "BaseBdev4", 00:17:18.373 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:18.373 "is_configured": true, 00:17:18.373 "data_offset": 2048, 00:17:18.373 "data_size": 63488 00:17:18.373 } 00:17:18.373 ] 00:17:18.373 }' 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.373 15:44:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.633 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.633 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.633 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.633 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.633 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.633 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.633 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.633 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.633 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.633 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.633 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.633 "name": "raid_bdev1", 00:17:18.633 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:18.633 "strip_size_kb": 64, 00:17:18.633 "state": "online", 00:17:18.633 "raid_level": "raid5f", 
00:17:18.633 "superblock": true, 00:17:18.633 "num_base_bdevs": 4, 00:17:18.633 "num_base_bdevs_discovered": 4, 00:17:18.633 "num_base_bdevs_operational": 4, 00:17:18.633 "base_bdevs_list": [ 00:17:18.633 { 00:17:18.633 "name": "spare", 00:17:18.633 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:18.633 "is_configured": true, 00:17:18.633 "data_offset": 2048, 00:17:18.633 "data_size": 63488 00:17:18.633 }, 00:17:18.633 { 00:17:18.633 "name": "BaseBdev2", 00:17:18.633 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:18.633 "is_configured": true, 00:17:18.633 "data_offset": 2048, 00:17:18.633 "data_size": 63488 00:17:18.633 }, 00:17:18.633 { 00:17:18.633 "name": "BaseBdev3", 00:17:18.633 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:18.633 "is_configured": true, 00:17:18.633 "data_offset": 2048, 00:17:18.633 "data_size": 63488 00:17:18.633 }, 00:17:18.633 { 00:17:18.633 "name": "BaseBdev4", 00:17:18.633 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:18.633 "is_configured": true, 00:17:18.633 "data_offset": 2048, 00:17:18.633 "data_size": 63488 00:17:18.633 } 00:17:18.633 ] 00:17:18.633 }' 00:17:18.633 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.893 [2024-11-25 15:44:17.405964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.893 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.894 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.894 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.894 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.894 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.894 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.894 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.894 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.894 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.894 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.894 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.894 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.894 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.894 "name": "raid_bdev1", 00:17:18.894 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:18.894 "strip_size_kb": 64, 00:17:18.894 "state": "online", 00:17:18.894 "raid_level": "raid5f", 00:17:18.894 "superblock": true, 00:17:18.894 "num_base_bdevs": 4, 00:17:18.894 "num_base_bdevs_discovered": 3, 00:17:18.894 "num_base_bdevs_operational": 3, 00:17:18.894 "base_bdevs_list": [ 00:17:18.894 { 00:17:18.894 "name": null, 00:17:18.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.894 "is_configured": false, 00:17:18.894 "data_offset": 0, 00:17:18.894 "data_size": 63488 00:17:18.894 }, 00:17:18.894 { 00:17:18.894 "name": "BaseBdev2", 00:17:18.894 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:18.894 "is_configured": true, 00:17:18.894 "data_offset": 2048, 00:17:18.894 "data_size": 63488 00:17:18.894 }, 00:17:18.894 { 00:17:18.894 "name": "BaseBdev3", 00:17:18.894 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:18.894 "is_configured": true, 00:17:18.894 "data_offset": 2048, 00:17:18.894 "data_size": 63488 00:17:18.894 }, 00:17:18.894 { 00:17:18.894 "name": "BaseBdev4", 00:17:18.894 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:18.894 "is_configured": true, 00:17:18.894 "data_offset": 2048, 00:17:18.894 "data_size": 63488 00:17:18.894 } 00:17:18.894 ] 00:17:18.894 }' 00:17:18.894 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.894 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.464 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:19.464 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.464 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.464 [2024-11-25 15:44:17.853210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.464 [2024-11-25 15:44:17.853433] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:19.464 [2024-11-25 15:44:17.853493] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:19.464 [2024-11-25 15:44:17.853565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.464 [2024-11-25 15:44:17.868094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:19.464 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.464 15:44:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:19.464 [2024-11-25 15:44:17.877000] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:20.405 15:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.405 15:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.405 15:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.405 15:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.405 15:44:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.405 15:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.405 15:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.405 15:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.405 15:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.405 15:44:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.405 15:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.405 "name": "raid_bdev1", 00:17:20.405 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:20.405 "strip_size_kb": 64, 00:17:20.405 "state": "online", 00:17:20.405 "raid_level": "raid5f", 00:17:20.405 "superblock": true, 00:17:20.405 "num_base_bdevs": 4, 00:17:20.405 "num_base_bdevs_discovered": 4, 00:17:20.405 "num_base_bdevs_operational": 4, 00:17:20.405 "process": { 00:17:20.405 "type": "rebuild", 00:17:20.405 "target": "spare", 00:17:20.405 "progress": { 00:17:20.405 "blocks": 19200, 00:17:20.405 "percent": 10 00:17:20.405 } 00:17:20.405 }, 00:17:20.405 "base_bdevs_list": [ 00:17:20.405 { 00:17:20.405 "name": "spare", 00:17:20.405 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:20.405 "is_configured": true, 00:17:20.405 "data_offset": 2048, 00:17:20.405 "data_size": 63488 00:17:20.405 }, 00:17:20.405 { 00:17:20.405 "name": "BaseBdev2", 00:17:20.405 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:20.405 "is_configured": true, 00:17:20.405 "data_offset": 2048, 00:17:20.405 "data_size": 63488 00:17:20.405 }, 00:17:20.405 { 00:17:20.405 "name": "BaseBdev3", 00:17:20.405 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:20.405 "is_configured": true, 00:17:20.405 "data_offset": 2048, 00:17:20.405 "data_size": 
63488 00:17:20.405 }, 00:17:20.405 { 00:17:20.405 "name": "BaseBdev4", 00:17:20.405 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:20.405 "is_configured": true, 00:17:20.405 "data_offset": 2048, 00:17:20.405 "data_size": 63488 00:17:20.405 } 00:17:20.405 ] 00:17:20.405 }' 00:17:20.405 15:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.405 15:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.405 15:44:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.405 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.405 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:20.405 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.405 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.405 [2024-11-25 15:44:19.035792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:20.405 [2024-11-25 15:44:19.082692] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:20.405 [2024-11-25 15:44:19.082757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.405 [2024-11-25 15:44:19.082774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:20.405 [2024-11-25 15:44:19.082784] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.666 "name": "raid_bdev1", 00:17:20.666 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:20.666 "strip_size_kb": 64, 00:17:20.666 "state": "online", 00:17:20.666 "raid_level": "raid5f", 00:17:20.666 "superblock": true, 00:17:20.666 "num_base_bdevs": 4, 00:17:20.666 "num_base_bdevs_discovered": 3, 00:17:20.666 "num_base_bdevs_operational": 3, 00:17:20.666 "base_bdevs_list": [ 00:17:20.666 
{ 00:17:20.666 "name": null, 00:17:20.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.666 "is_configured": false, 00:17:20.666 "data_offset": 0, 00:17:20.666 "data_size": 63488 00:17:20.666 }, 00:17:20.666 { 00:17:20.666 "name": "BaseBdev2", 00:17:20.666 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:20.666 "is_configured": true, 00:17:20.666 "data_offset": 2048, 00:17:20.666 "data_size": 63488 00:17:20.666 }, 00:17:20.666 { 00:17:20.666 "name": "BaseBdev3", 00:17:20.666 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:20.666 "is_configured": true, 00:17:20.666 "data_offset": 2048, 00:17:20.666 "data_size": 63488 00:17:20.666 }, 00:17:20.666 { 00:17:20.666 "name": "BaseBdev4", 00:17:20.666 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:20.666 "is_configured": true, 00:17:20.666 "data_offset": 2048, 00:17:20.666 "data_size": 63488 00:17:20.666 } 00:17:20.666 ] 00:17:20.666 }' 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.666 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.926 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:20.926 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.926 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.926 [2024-11-25 15:44:19.561075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:20.926 [2024-11-25 15:44:19.561181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.926 [2024-11-25 15:44:19.561222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:20.926 [2024-11-25 15:44:19.561272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.926 [2024-11-25 15:44:19.561771] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.926 [2024-11-25 15:44:19.561839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:20.926 [2024-11-25 15:44:19.561959] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:20.926 [2024-11-25 15:44:19.562002] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:20.926 [2024-11-25 15:44:19.562059] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:20.926 [2024-11-25 15:44:19.562106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:20.926 [2024-11-25 15:44:19.576242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:20.926 spare 00:17:20.926 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.926 15:44:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:20.926 [2024-11-25 15:44:19.585020] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.307 "name": "raid_bdev1", 00:17:22.307 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:22.307 "strip_size_kb": 64, 00:17:22.307 "state": "online", 00:17:22.307 "raid_level": "raid5f", 00:17:22.307 "superblock": true, 00:17:22.307 "num_base_bdevs": 4, 00:17:22.307 "num_base_bdevs_discovered": 4, 00:17:22.307 "num_base_bdevs_operational": 4, 00:17:22.307 "process": { 00:17:22.307 "type": "rebuild", 00:17:22.307 "target": "spare", 00:17:22.307 "progress": { 00:17:22.307 "blocks": 19200, 00:17:22.307 "percent": 10 00:17:22.307 } 00:17:22.307 }, 00:17:22.307 "base_bdevs_list": [ 00:17:22.307 { 00:17:22.307 "name": "spare", 00:17:22.307 "uuid": "aaf1c91b-f8bd-5be3-945d-4796446f6951", 00:17:22.307 "is_configured": true, 00:17:22.307 "data_offset": 2048, 00:17:22.307 "data_size": 63488 00:17:22.307 }, 00:17:22.307 { 00:17:22.307 "name": "BaseBdev2", 00:17:22.307 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:22.307 "is_configured": true, 00:17:22.307 "data_offset": 2048, 00:17:22.307 "data_size": 63488 00:17:22.307 }, 00:17:22.307 { 00:17:22.307 "name": "BaseBdev3", 00:17:22.307 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:22.307 "is_configured": true, 00:17:22.307 "data_offset": 2048, 00:17:22.307 "data_size": 63488 00:17:22.307 }, 00:17:22.307 { 00:17:22.307 "name": "BaseBdev4", 00:17:22.307 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:22.307 "is_configured": true, 00:17:22.307 "data_offset": 2048, 00:17:22.307 "data_size": 63488 00:17:22.307 } 
00:17:22.307 ] 00:17:22.307 }' 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.307 [2024-11-25 15:44:20.739677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.307 [2024-11-25 15:44:20.790634] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:22.307 [2024-11-25 15:44:20.790727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.307 [2024-11-25 15:44:20.790764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.307 [2024-11-25 15:44:20.790771] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.307 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.308 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.308 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.308 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.308 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.308 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.308 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.308 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.308 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.308 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.308 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.308 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.308 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.308 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.308 "name": "raid_bdev1", 00:17:22.308 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:22.308 "strip_size_kb": 64, 00:17:22.308 "state": "online", 00:17:22.308 "raid_level": "raid5f", 00:17:22.308 "superblock": true, 00:17:22.308 "num_base_bdevs": 4, 00:17:22.308 "num_base_bdevs_discovered": 3, 00:17:22.308 "num_base_bdevs_operational": 3, 00:17:22.308 "base_bdevs_list": [ 00:17:22.308 { 00:17:22.308 "name": null, 00:17:22.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.308 "is_configured": false, 00:17:22.308 "data_offset": 0, 00:17:22.308 "data_size": 63488 00:17:22.308 }, 00:17:22.308 { 00:17:22.308 
"name": "BaseBdev2", 00:17:22.308 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:22.308 "is_configured": true, 00:17:22.308 "data_offset": 2048, 00:17:22.308 "data_size": 63488 00:17:22.308 }, 00:17:22.308 { 00:17:22.308 "name": "BaseBdev3", 00:17:22.308 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:22.308 "is_configured": true, 00:17:22.308 "data_offset": 2048, 00:17:22.308 "data_size": 63488 00:17:22.308 }, 00:17:22.308 { 00:17:22.308 "name": "BaseBdev4", 00:17:22.308 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:22.308 "is_configured": true, 00:17:22.308 "data_offset": 2048, 00:17:22.308 "data_size": 63488 00:17:22.308 } 00:17:22.308 ] 00:17:22.308 }' 00:17:22.308 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.308 15:44:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.877 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.878 "name": "raid_bdev1", 00:17:22.878 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:22.878 "strip_size_kb": 64, 00:17:22.878 "state": "online", 00:17:22.878 "raid_level": "raid5f", 00:17:22.878 "superblock": true, 00:17:22.878 "num_base_bdevs": 4, 00:17:22.878 "num_base_bdevs_discovered": 3, 00:17:22.878 "num_base_bdevs_operational": 3, 00:17:22.878 "base_bdevs_list": [ 00:17:22.878 { 00:17:22.878 "name": null, 00:17:22.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.878 "is_configured": false, 00:17:22.878 "data_offset": 0, 00:17:22.878 "data_size": 63488 00:17:22.878 }, 00:17:22.878 { 00:17:22.878 "name": "BaseBdev2", 00:17:22.878 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:22.878 "is_configured": true, 00:17:22.878 "data_offset": 2048, 00:17:22.878 "data_size": 63488 00:17:22.878 }, 00:17:22.878 { 00:17:22.878 "name": "BaseBdev3", 00:17:22.878 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:22.878 "is_configured": true, 00:17:22.878 "data_offset": 2048, 00:17:22.878 "data_size": 63488 00:17:22.878 }, 00:17:22.878 { 00:17:22.878 "name": "BaseBdev4", 00:17:22.878 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:22.878 "is_configured": true, 00:17:22.878 "data_offset": 2048, 00:17:22.878 "data_size": 63488 00:17:22.878 } 00:17:22.878 ] 00:17:22.878 }' 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.878 [2024-11-25 15:44:21.410256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:22.878 [2024-11-25 15:44:21.410307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.878 [2024-11-25 15:44:21.410344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:22.878 [2024-11-25 15:44:21.410353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.878 [2024-11-25 15:44:21.410789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.878 [2024-11-25 15:44:21.410806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:22.878 [2024-11-25 15:44:21.410879] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:22.878 [2024-11-25 15:44:21.410894] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:22.878 [2024-11-25 15:44:21.410905] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:22.878 [2024-11-25 15:44:21.410916] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:17:22.878 BaseBdev1 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.878 15:44:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.824 15:44:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.824 "name": "raid_bdev1", 00:17:23.824 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:23.824 "strip_size_kb": 64, 00:17:23.824 "state": "online", 00:17:23.824 "raid_level": "raid5f", 00:17:23.824 "superblock": true, 00:17:23.824 "num_base_bdevs": 4, 00:17:23.824 "num_base_bdevs_discovered": 3, 00:17:23.824 "num_base_bdevs_operational": 3, 00:17:23.824 "base_bdevs_list": [ 00:17:23.824 { 00:17:23.824 "name": null, 00:17:23.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.824 "is_configured": false, 00:17:23.824 "data_offset": 0, 00:17:23.824 "data_size": 63488 00:17:23.824 }, 00:17:23.824 { 00:17:23.824 "name": "BaseBdev2", 00:17:23.824 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:23.824 "is_configured": true, 00:17:23.824 "data_offset": 2048, 00:17:23.824 "data_size": 63488 00:17:23.824 }, 00:17:23.824 { 00:17:23.824 "name": "BaseBdev3", 00:17:23.824 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:23.824 "is_configured": true, 00:17:23.824 "data_offset": 2048, 00:17:23.824 "data_size": 63488 00:17:23.824 }, 00:17:23.824 { 00:17:23.824 "name": "BaseBdev4", 00:17:23.824 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:23.824 "is_configured": true, 00:17:23.824 "data_offset": 2048, 00:17:23.824 "data_size": 63488 00:17:23.824 } 00:17:23.824 ] 00:17:23.824 }' 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.824 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.395 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:24.395 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.395 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:24.395 15:44:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:24.395 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.395 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.395 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.395 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.395 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.395 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.395 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.395 "name": "raid_bdev1", 00:17:24.395 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:24.395 "strip_size_kb": 64, 00:17:24.395 "state": "online", 00:17:24.395 "raid_level": "raid5f", 00:17:24.395 "superblock": true, 00:17:24.395 "num_base_bdevs": 4, 00:17:24.395 "num_base_bdevs_discovered": 3, 00:17:24.395 "num_base_bdevs_operational": 3, 00:17:24.395 "base_bdevs_list": [ 00:17:24.395 { 00:17:24.395 "name": null, 00:17:24.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.395 "is_configured": false, 00:17:24.395 "data_offset": 0, 00:17:24.395 "data_size": 63488 00:17:24.395 }, 00:17:24.395 { 00:17:24.395 "name": "BaseBdev2", 00:17:24.395 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:24.395 "is_configured": true, 00:17:24.395 "data_offset": 2048, 00:17:24.395 "data_size": 63488 00:17:24.395 }, 00:17:24.395 { 00:17:24.395 "name": "BaseBdev3", 00:17:24.395 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:24.395 "is_configured": true, 00:17:24.395 "data_offset": 2048, 00:17:24.395 "data_size": 63488 00:17:24.395 }, 00:17:24.395 { 00:17:24.395 "name": "BaseBdev4", 00:17:24.395 "uuid": 
"38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:24.395 "is_configured": true, 00:17:24.395 "data_offset": 2048, 00:17:24.395 "data_size": 63488 00:17:24.395 } 00:17:24.395 ] 00:17:24.395 }' 00:17:24.395 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.395 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:24.395 15:44:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.395 [2024-11-25 15:44:23.027607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.395 
[2024-11-25 15:44:23.027809] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:24.395 [2024-11-25 15:44:23.027869] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:24.395 request: 00:17:24.395 { 00:17:24.395 "base_bdev": "BaseBdev1", 00:17:24.395 "raid_bdev": "raid_bdev1", 00:17:24.395 "method": "bdev_raid_add_base_bdev", 00:17:24.395 "req_id": 1 00:17:24.395 } 00:17:24.395 Got JSON-RPC error response 00:17:24.395 response: 00:17:24.395 { 00:17:24.395 "code": -22, 00:17:24.395 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:24.395 } 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.395 15:44:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:25.778 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:25.778 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.778 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.778 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.778 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.778 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:17:25.778 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.778 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.778 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.778 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.778 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.778 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.779 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.779 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.779 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.779 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.779 "name": "raid_bdev1", 00:17:25.779 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:25.779 "strip_size_kb": 64, 00:17:25.779 "state": "online", 00:17:25.779 "raid_level": "raid5f", 00:17:25.779 "superblock": true, 00:17:25.779 "num_base_bdevs": 4, 00:17:25.779 "num_base_bdevs_discovered": 3, 00:17:25.779 "num_base_bdevs_operational": 3, 00:17:25.779 "base_bdevs_list": [ 00:17:25.779 { 00:17:25.779 "name": null, 00:17:25.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.779 "is_configured": false, 00:17:25.779 "data_offset": 0, 00:17:25.779 "data_size": 63488 00:17:25.779 }, 00:17:25.779 { 00:17:25.779 "name": "BaseBdev2", 00:17:25.779 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:25.779 "is_configured": true, 00:17:25.779 "data_offset": 2048, 00:17:25.779 "data_size": 63488 00:17:25.779 }, 00:17:25.779 { 00:17:25.779 "name": 
"BaseBdev3", 00:17:25.779 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:25.779 "is_configured": true, 00:17:25.779 "data_offset": 2048, 00:17:25.779 "data_size": 63488 00:17:25.779 }, 00:17:25.779 { 00:17:25.779 "name": "BaseBdev4", 00:17:25.779 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:25.779 "is_configured": true, 00:17:25.779 "data_offset": 2048, 00:17:25.779 "data_size": 63488 00:17:25.779 } 00:17:25.779 ] 00:17:25.779 }' 00:17:25.779 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.779 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.040 "name": "raid_bdev1", 00:17:26.040 "uuid": "2f4cbc1d-1597-4911-ab7f-d011e7128afc", 00:17:26.040 
"strip_size_kb": 64, 00:17:26.040 "state": "online", 00:17:26.040 "raid_level": "raid5f", 00:17:26.040 "superblock": true, 00:17:26.040 "num_base_bdevs": 4, 00:17:26.040 "num_base_bdevs_discovered": 3, 00:17:26.040 "num_base_bdevs_operational": 3, 00:17:26.040 "base_bdevs_list": [ 00:17:26.040 { 00:17:26.040 "name": null, 00:17:26.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.040 "is_configured": false, 00:17:26.040 "data_offset": 0, 00:17:26.040 "data_size": 63488 00:17:26.040 }, 00:17:26.040 { 00:17:26.040 "name": "BaseBdev2", 00:17:26.040 "uuid": "9c8a6239-db6b-58d1-949f-f4bd03d84fb4", 00:17:26.040 "is_configured": true, 00:17:26.040 "data_offset": 2048, 00:17:26.040 "data_size": 63488 00:17:26.040 }, 00:17:26.040 { 00:17:26.040 "name": "BaseBdev3", 00:17:26.040 "uuid": "5ba25e7e-cfd2-507a-93f0-aed7022b2eee", 00:17:26.040 "is_configured": true, 00:17:26.040 "data_offset": 2048, 00:17:26.040 "data_size": 63488 00:17:26.040 }, 00:17:26.040 { 00:17:26.040 "name": "BaseBdev4", 00:17:26.040 "uuid": "38df23c9-6d87-5940-9d8a-9c38a9bba6de", 00:17:26.040 "is_configured": true, 00:17:26.040 "data_offset": 2048, 00:17:26.040 "data_size": 63488 00:17:26.040 } 00:17:26.040 ] 00:17:26.040 }' 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84726 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84726 ']' 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84726 00:17:26.040 
15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84726 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.040 killing process with pid 84726 00:17:26.040 Received shutdown signal, test time was about 60.000000 seconds 00:17:26.040 00:17:26.040 Latency(us) 00:17:26.040 [2024-11-25T15:44:24.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.040 [2024-11-25T15:44:24.721Z] =================================================================================================================== 00:17:26.040 [2024-11-25T15:44:24.721Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84726' 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84726 00:17:26.040 [2024-11-25 15:44:24.698518] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:26.040 [2024-11-25 15:44:24.698626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.040 [2024-11-25 15:44:24.698693] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:26.040 15:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84726 00:17:26.040 [2024-11-25 15:44:24.698703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:26.611 [2024-11-25 15:44:25.154614] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:27.551 15:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:27.551 00:17:27.551 real 0m26.599s 00:17:27.551 user 0m33.516s 00:17:27.551 sys 0m2.836s 00:17:27.551 15:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.551 ************************************ 00:17:27.551 END TEST raid5f_rebuild_test_sb 00:17:27.551 ************************************ 00:17:27.551 15:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.551 15:44:26 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:27.551 15:44:26 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:27.551 15:44:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:27.551 15:44:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.551 15:44:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.812 ************************************ 00:17:27.812 START TEST raid_state_function_test_sb_4k 00:17:27.812 ************************************ 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85533 
00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:27.812 Process raid pid: 85533 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85533' 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85533 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85533 ']' 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.812 15:44:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.812 [2024-11-25 15:44:26.349392] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:17:27.812 [2024-11-25 15:44:26.349590] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.072 [2024-11-25 15:44:26.528760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.072 [2024-11-25 15:44:26.630373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.333 [2024-11-25 15:44:26.795324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.333 [2024-11-25 15:44:26.795360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.594 [2024-11-25 15:44:27.166428] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:28.594 [2024-11-25 15:44:27.166533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:28.594 [2024-11-25 15:44:27.166547] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:28.594 [2024-11-25 15:44:27.166557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.594 "name": "Existed_Raid", 00:17:28.594 "uuid": 
"0dde5d94-6b41-469e-9a02-eea33e8c5190", 00:17:28.594 "strip_size_kb": 0, 00:17:28.594 "state": "configuring", 00:17:28.594 "raid_level": "raid1", 00:17:28.594 "superblock": true, 00:17:28.594 "num_base_bdevs": 2, 00:17:28.594 "num_base_bdevs_discovered": 0, 00:17:28.594 "num_base_bdevs_operational": 2, 00:17:28.594 "base_bdevs_list": [ 00:17:28.594 { 00:17:28.594 "name": "BaseBdev1", 00:17:28.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.594 "is_configured": false, 00:17:28.594 "data_offset": 0, 00:17:28.594 "data_size": 0 00:17:28.594 }, 00:17:28.594 { 00:17:28.594 "name": "BaseBdev2", 00:17:28.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.594 "is_configured": false, 00:17:28.594 "data_offset": 0, 00:17:28.594 "data_size": 0 00:17:28.594 } 00:17:28.594 ] 00:17:28.594 }' 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.594 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.164 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:29.164 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.164 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.164 [2024-11-25 15:44:27.645525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:29.164 [2024-11-25 15:44:27.645598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:29.164 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.164 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:29.164 15:44:27 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.164 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.164 [2024-11-25 15:44:27.653512] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:29.164 [2024-11-25 15:44:27.653586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:29.164 [2024-11-25 15:44:27.653615] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:29.164 [2024-11-25 15:44:27.653655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.165 [2024-11-25 15:44:27.691594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.165 BaseBdev1 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.165 [ 00:17:29.165 { 00:17:29.165 "name": "BaseBdev1", 00:17:29.165 "aliases": [ 00:17:29.165 "e3543672-8551-4b48-8963-9a8086a778fc" 00:17:29.165 ], 00:17:29.165 "product_name": "Malloc disk", 00:17:29.165 "block_size": 4096, 00:17:29.165 "num_blocks": 8192, 00:17:29.165 "uuid": "e3543672-8551-4b48-8963-9a8086a778fc", 00:17:29.165 "assigned_rate_limits": { 00:17:29.165 "rw_ios_per_sec": 0, 00:17:29.165 "rw_mbytes_per_sec": 0, 00:17:29.165 "r_mbytes_per_sec": 0, 00:17:29.165 "w_mbytes_per_sec": 0 00:17:29.165 }, 00:17:29.165 "claimed": true, 00:17:29.165 "claim_type": "exclusive_write", 00:17:29.165 "zoned": false, 00:17:29.165 "supported_io_types": { 00:17:29.165 "read": true, 00:17:29.165 "write": true, 00:17:29.165 "unmap": true, 00:17:29.165 "flush": true, 00:17:29.165 "reset": true, 00:17:29.165 "nvme_admin": false, 00:17:29.165 "nvme_io": false, 00:17:29.165 "nvme_io_md": false, 00:17:29.165 "write_zeroes": true, 00:17:29.165 "zcopy": true, 00:17:29.165 
"get_zone_info": false, 00:17:29.165 "zone_management": false, 00:17:29.165 "zone_append": false, 00:17:29.165 "compare": false, 00:17:29.165 "compare_and_write": false, 00:17:29.165 "abort": true, 00:17:29.165 "seek_hole": false, 00:17:29.165 "seek_data": false, 00:17:29.165 "copy": true, 00:17:29.165 "nvme_iov_md": false 00:17:29.165 }, 00:17:29.165 "memory_domains": [ 00:17:29.165 { 00:17:29.165 "dma_device_id": "system", 00:17:29.165 "dma_device_type": 1 00:17:29.165 }, 00:17:29.165 { 00:17:29.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.165 "dma_device_type": 2 00:17:29.165 } 00:17:29.165 ], 00:17:29.165 "driver_specific": {} 00:17:29.165 } 00:17:29.165 ] 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.165 "name": "Existed_Raid", 00:17:29.165 "uuid": "816791dc-a47f-4ebd-8f38-aec7635db7b7", 00:17:29.165 "strip_size_kb": 0, 00:17:29.165 "state": "configuring", 00:17:29.165 "raid_level": "raid1", 00:17:29.165 "superblock": true, 00:17:29.165 "num_base_bdevs": 2, 00:17:29.165 "num_base_bdevs_discovered": 1, 00:17:29.165 "num_base_bdevs_operational": 2, 00:17:29.165 "base_bdevs_list": [ 00:17:29.165 { 00:17:29.165 "name": "BaseBdev1", 00:17:29.165 "uuid": "e3543672-8551-4b48-8963-9a8086a778fc", 00:17:29.165 "is_configured": true, 00:17:29.165 "data_offset": 256, 00:17:29.165 "data_size": 7936 00:17:29.165 }, 00:17:29.165 { 00:17:29.165 "name": "BaseBdev2", 00:17:29.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.165 "is_configured": false, 00:17:29.165 "data_offset": 0, 00:17:29.165 "data_size": 0 00:17:29.165 } 00:17:29.165 ] 00:17:29.165 }' 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.165 15:44:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.735 [2024-11-25 15:44:28.214778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:29.735 [2024-11-25 15:44:28.214818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.735 [2024-11-25 15:44:28.226801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.735 [2024-11-25 15:44:28.228610] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:29.735 [2024-11-25 15:44:28.228652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:29.735 15:44:28 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.735 "name": "Existed_Raid", 00:17:29.735 "uuid": "ecbd2fc4-b563-437f-8147-84f3622eab1d", 00:17:29.735 "strip_size_kb": 0, 00:17:29.735 "state": "configuring", 00:17:29.735 "raid_level": "raid1", 00:17:29.735 "superblock": true, 
00:17:29.735 "num_base_bdevs": 2, 00:17:29.735 "num_base_bdevs_discovered": 1, 00:17:29.735 "num_base_bdevs_operational": 2, 00:17:29.735 "base_bdevs_list": [ 00:17:29.735 { 00:17:29.735 "name": "BaseBdev1", 00:17:29.735 "uuid": "e3543672-8551-4b48-8963-9a8086a778fc", 00:17:29.735 "is_configured": true, 00:17:29.735 "data_offset": 256, 00:17:29.735 "data_size": 7936 00:17:29.735 }, 00:17:29.735 { 00:17:29.735 "name": "BaseBdev2", 00:17:29.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.735 "is_configured": false, 00:17:29.735 "data_offset": 0, 00:17:29.735 "data_size": 0 00:17:29.735 } 00:17:29.735 ] 00:17:29.735 }' 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.735 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.996 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:29.996 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.996 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.256 [2024-11-25 15:44:28.690213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:30.256 [2024-11-25 15:44:28.690534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:30.256 [2024-11-25 15:44:28.690598] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:30.256 [2024-11-25 15:44:28.690869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:30.256 [2024-11-25 15:44:28.691072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:30.256 [2024-11-25 15:44:28.691119] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:17:30.256 BaseBdev2 00:17:30.256 [2024-11-25 15:44:28.691295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.256 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.256 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:30.256 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:30.256 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:30.256 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:30.256 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:30.256 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:30.256 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:30.256 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.256 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.256 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.256 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:30.256 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.256 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.256 [ 00:17:30.256 { 00:17:30.256 "name": "BaseBdev2", 00:17:30.256 "aliases": [ 00:17:30.256 "6347e68e-b8f0-41ab-b062-b67deb1d2254" 00:17:30.256 ], 00:17:30.256 "product_name": "Malloc 
disk", 00:17:30.256 "block_size": 4096, 00:17:30.256 "num_blocks": 8192, 00:17:30.256 "uuid": "6347e68e-b8f0-41ab-b062-b67deb1d2254", 00:17:30.256 "assigned_rate_limits": { 00:17:30.256 "rw_ios_per_sec": 0, 00:17:30.256 "rw_mbytes_per_sec": 0, 00:17:30.256 "r_mbytes_per_sec": 0, 00:17:30.256 "w_mbytes_per_sec": 0 00:17:30.256 }, 00:17:30.256 "claimed": true, 00:17:30.256 "claim_type": "exclusive_write", 00:17:30.256 "zoned": false, 00:17:30.256 "supported_io_types": { 00:17:30.256 "read": true, 00:17:30.256 "write": true, 00:17:30.256 "unmap": true, 00:17:30.256 "flush": true, 00:17:30.256 "reset": true, 00:17:30.256 "nvme_admin": false, 00:17:30.256 "nvme_io": false, 00:17:30.256 "nvme_io_md": false, 00:17:30.256 "write_zeroes": true, 00:17:30.256 "zcopy": true, 00:17:30.256 "get_zone_info": false, 00:17:30.256 "zone_management": false, 00:17:30.256 "zone_append": false, 00:17:30.256 "compare": false, 00:17:30.256 "compare_and_write": false, 00:17:30.256 "abort": true, 00:17:30.256 "seek_hole": false, 00:17:30.256 "seek_data": false, 00:17:30.256 "copy": true, 00:17:30.256 "nvme_iov_md": false 00:17:30.256 }, 00:17:30.256 "memory_domains": [ 00:17:30.256 { 00:17:30.256 "dma_device_id": "system", 00:17:30.256 "dma_device_type": 1 00:17:30.256 }, 00:17:30.256 { 00:17:30.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.256 "dma_device_type": 2 00:17:30.256 } 00:17:30.256 ], 00:17:30.256 "driver_specific": {} 00:17:30.256 } 00:17:30.256 ] 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.257 "name": "Existed_Raid", 00:17:30.257 "uuid": "ecbd2fc4-b563-437f-8147-84f3622eab1d", 00:17:30.257 "strip_size_kb": 0, 00:17:30.257 "state": "online", 
00:17:30.257 "raid_level": "raid1", 00:17:30.257 "superblock": true, 00:17:30.257 "num_base_bdevs": 2, 00:17:30.257 "num_base_bdevs_discovered": 2, 00:17:30.257 "num_base_bdevs_operational": 2, 00:17:30.257 "base_bdevs_list": [ 00:17:30.257 { 00:17:30.257 "name": "BaseBdev1", 00:17:30.257 "uuid": "e3543672-8551-4b48-8963-9a8086a778fc", 00:17:30.257 "is_configured": true, 00:17:30.257 "data_offset": 256, 00:17:30.257 "data_size": 7936 00:17:30.257 }, 00:17:30.257 { 00:17:30.257 "name": "BaseBdev2", 00:17:30.257 "uuid": "6347e68e-b8f0-41ab-b062-b67deb1d2254", 00:17:30.257 "is_configured": true, 00:17:30.257 "data_offset": 256, 00:17:30.257 "data_size": 7936 00:17:30.257 } 00:17:30.257 ] 00:17:30.257 }' 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.257 15:44:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.827 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:30.827 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:30.827 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:30.827 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:30.827 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:30.827 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:30.827 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:30.827 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:30.827 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.828 [2024-11-25 15:44:29.233527] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:30.828 "name": "Existed_Raid", 00:17:30.828 "aliases": [ 00:17:30.828 "ecbd2fc4-b563-437f-8147-84f3622eab1d" 00:17:30.828 ], 00:17:30.828 "product_name": "Raid Volume", 00:17:30.828 "block_size": 4096, 00:17:30.828 "num_blocks": 7936, 00:17:30.828 "uuid": "ecbd2fc4-b563-437f-8147-84f3622eab1d", 00:17:30.828 "assigned_rate_limits": { 00:17:30.828 "rw_ios_per_sec": 0, 00:17:30.828 "rw_mbytes_per_sec": 0, 00:17:30.828 "r_mbytes_per_sec": 0, 00:17:30.828 "w_mbytes_per_sec": 0 00:17:30.828 }, 00:17:30.828 "claimed": false, 00:17:30.828 "zoned": false, 00:17:30.828 "supported_io_types": { 00:17:30.828 "read": true, 00:17:30.828 "write": true, 00:17:30.828 "unmap": false, 00:17:30.828 "flush": false, 00:17:30.828 "reset": true, 00:17:30.828 "nvme_admin": false, 00:17:30.828 "nvme_io": false, 00:17:30.828 "nvme_io_md": false, 00:17:30.828 "write_zeroes": true, 00:17:30.828 "zcopy": false, 00:17:30.828 "get_zone_info": false, 00:17:30.828 "zone_management": false, 00:17:30.828 "zone_append": false, 00:17:30.828 "compare": false, 00:17:30.828 "compare_and_write": false, 00:17:30.828 "abort": false, 00:17:30.828 "seek_hole": false, 00:17:30.828 "seek_data": false, 00:17:30.828 "copy": false, 00:17:30.828 "nvme_iov_md": false 00:17:30.828 }, 00:17:30.828 "memory_domains": [ 00:17:30.828 { 00:17:30.828 "dma_device_id": "system", 00:17:30.828 "dma_device_type": 1 00:17:30.828 }, 00:17:30.828 { 00:17:30.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.828 "dma_device_type": 2 00:17:30.828 }, 00:17:30.828 { 00:17:30.828 
"dma_device_id": "system", 00:17:30.828 "dma_device_type": 1 00:17:30.828 }, 00:17:30.828 { 00:17:30.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.828 "dma_device_type": 2 00:17:30.828 } 00:17:30.828 ], 00:17:30.828 "driver_specific": { 00:17:30.828 "raid": { 00:17:30.828 "uuid": "ecbd2fc4-b563-437f-8147-84f3622eab1d", 00:17:30.828 "strip_size_kb": 0, 00:17:30.828 "state": "online", 00:17:30.828 "raid_level": "raid1", 00:17:30.828 "superblock": true, 00:17:30.828 "num_base_bdevs": 2, 00:17:30.828 "num_base_bdevs_discovered": 2, 00:17:30.828 "num_base_bdevs_operational": 2, 00:17:30.828 "base_bdevs_list": [ 00:17:30.828 { 00:17:30.828 "name": "BaseBdev1", 00:17:30.828 "uuid": "e3543672-8551-4b48-8963-9a8086a778fc", 00:17:30.828 "is_configured": true, 00:17:30.828 "data_offset": 256, 00:17:30.828 "data_size": 7936 00:17:30.828 }, 00:17:30.828 { 00:17:30.828 "name": "BaseBdev2", 00:17:30.828 "uuid": "6347e68e-b8f0-41ab-b062-b67deb1d2254", 00:17:30.828 "is_configured": true, 00:17:30.828 "data_offset": 256, 00:17:30.828 "data_size": 7936 00:17:30.828 } 00:17:30.828 ] 00:17:30.828 } 00:17:30.828 } 00:17:30.828 }' 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:30.828 BaseBdev2' 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:30.828 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.828 
15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.828 [2024-11-25 15:44:29.448973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.089 15:44:29 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.089 "name": "Existed_Raid", 00:17:31.089 "uuid": "ecbd2fc4-b563-437f-8147-84f3622eab1d", 00:17:31.089 "strip_size_kb": 0, 00:17:31.089 "state": "online", 00:17:31.089 "raid_level": "raid1", 00:17:31.089 "superblock": true, 00:17:31.089 "num_base_bdevs": 2, 00:17:31.089 "num_base_bdevs_discovered": 1, 00:17:31.089 "num_base_bdevs_operational": 1, 00:17:31.089 "base_bdevs_list": [ 00:17:31.089 { 00:17:31.089 "name": null, 00:17:31.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.089 "is_configured": false, 00:17:31.089 "data_offset": 0, 00:17:31.089 "data_size": 7936 00:17:31.089 }, 00:17:31.089 { 00:17:31.089 "name": "BaseBdev2", 00:17:31.089 "uuid": "6347e68e-b8f0-41ab-b062-b67deb1d2254", 00:17:31.089 "is_configured": true, 00:17:31.089 "data_offset": 256, 00:17:31.089 "data_size": 7936 00:17:31.089 } 00:17:31.089 ] 00:17:31.089 }' 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.089 15:44:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.349 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:31.349 15:44:30 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:31.349 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:31.349 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.349 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.349 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.349 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.609 [2024-11-25 15:44:30.040685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:31.609 [2024-11-25 15:44:30.040844] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.609 [2024-11-25 15:44:30.130282] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.609 [2024-11-25 15:44:30.130336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.609 [2024-11-25 15:44:30.130347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:31.609 15:44:30 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85533 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85533 ']' 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85533 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.609 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85533 00:17:31.610 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.610 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.610 killing process with pid 85533 00:17:31.610 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85533' 00:17:31.610 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85533 00:17:31.610 [2024-11-25 15:44:30.213420] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:31.610 15:44:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85533 00:17:31.610 [2024-11-25 15:44:30.229897] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:32.993 15:44:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:32.993 00:17:32.993 real 0m5.018s 00:17:32.993 user 0m7.263s 00:17:32.993 sys 0m0.911s 00:17:32.993 ************************************ 00:17:32.993 END TEST raid_state_function_test_sb_4k 00:17:32.993 ************************************ 00:17:32.993 15:44:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.993 15:44:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.993 15:44:31 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:32.993 15:44:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:32.993 15:44:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.993 15:44:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.993 ************************************ 00:17:32.993 START TEST raid_superblock_test_4k 00:17:32.993 ************************************ 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85780 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 85780 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 85780 ']' 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.993 15:44:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.993 [2024-11-25 15:44:31.443270] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:17:32.993 [2024-11-25 15:44:31.443388] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85780 ] 00:17:32.993 [2024-11-25 15:44:31.616355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.254 [2024-11-25 15:44:31.722420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.254 [2024-11-25 15:44:31.914502] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.254 [2024-11-25 15:44:31.914551] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.825 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:33.826 15:44:32 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.826 malloc1 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.826 [2024-11-25 15:44:32.267874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:33.826 [2024-11-25 15:44:32.267998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.826 
[2024-11-25 15:44:32.268058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:33.826 [2024-11-25 15:44:32.268090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.826 [2024-11-25 15:44:32.270129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.826 [2024-11-25 15:44:32.270196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:33.826 pt1 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.826 malloc2 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.826 [2024-11-25 15:44:32.321417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:33.826 [2024-11-25 15:44:32.321465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.826 [2024-11-25 15:44:32.321500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:33.826 [2024-11-25 15:44:32.321509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.826 [2024-11-25 15:44:32.323523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.826 [2024-11-25 15:44:32.323560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:33.826 pt2 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.826 [2024-11-25 15:44:32.333446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:33.826 [2024-11-25 15:44:32.335065] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:33.826 [2024-11-25 15:44:32.335225] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:33.826 [2024-11-25 15:44:32.335241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:33.826 [2024-11-25 15:44:32.335475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:33.826 [2024-11-25 15:44:32.335650] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:33.826 [2024-11-25 15:44:32.335664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:33.826 [2024-11-25 15:44:32.335797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.826 "name": "raid_bdev1", 00:17:33.826 "uuid": "aab5d7e1-ecba-4fa7-8f54-5fff31bd3159", 00:17:33.826 "strip_size_kb": 0, 00:17:33.826 "state": "online", 00:17:33.826 "raid_level": "raid1", 00:17:33.826 "superblock": true, 00:17:33.826 "num_base_bdevs": 2, 00:17:33.826 "num_base_bdevs_discovered": 2, 00:17:33.826 "num_base_bdevs_operational": 2, 00:17:33.826 "base_bdevs_list": [ 00:17:33.826 { 00:17:33.826 "name": "pt1", 00:17:33.826 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.826 "is_configured": true, 00:17:33.826 "data_offset": 256, 00:17:33.826 "data_size": 7936 00:17:33.826 }, 00:17:33.826 { 00:17:33.826 "name": "pt2", 00:17:33.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.826 "is_configured": true, 00:17:33.826 "data_offset": 256, 00:17:33.826 "data_size": 7936 00:17:33.826 } 00:17:33.826 ] 00:17:33.826 }' 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.826 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:34.396 15:44:32 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.396 [2024-11-25 15:44:32.800872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:34.396 "name": "raid_bdev1", 00:17:34.396 "aliases": [ 00:17:34.396 "aab5d7e1-ecba-4fa7-8f54-5fff31bd3159" 00:17:34.396 ], 00:17:34.396 "product_name": "Raid Volume", 00:17:34.396 "block_size": 4096, 00:17:34.396 "num_blocks": 7936, 00:17:34.396 "uuid": "aab5d7e1-ecba-4fa7-8f54-5fff31bd3159", 00:17:34.396 "assigned_rate_limits": { 00:17:34.396 "rw_ios_per_sec": 0, 00:17:34.396 "rw_mbytes_per_sec": 0, 00:17:34.396 "r_mbytes_per_sec": 0, 00:17:34.396 "w_mbytes_per_sec": 0 00:17:34.396 }, 00:17:34.396 "claimed": false, 00:17:34.396 "zoned": false, 00:17:34.396 "supported_io_types": { 00:17:34.396 "read": true, 00:17:34.396 "write": true, 00:17:34.396 "unmap": false, 00:17:34.396 "flush": false, 
00:17:34.396 "reset": true, 00:17:34.396 "nvme_admin": false, 00:17:34.396 "nvme_io": false, 00:17:34.396 "nvme_io_md": false, 00:17:34.396 "write_zeroes": true, 00:17:34.396 "zcopy": false, 00:17:34.396 "get_zone_info": false, 00:17:34.396 "zone_management": false, 00:17:34.396 "zone_append": false, 00:17:34.396 "compare": false, 00:17:34.396 "compare_and_write": false, 00:17:34.396 "abort": false, 00:17:34.396 "seek_hole": false, 00:17:34.396 "seek_data": false, 00:17:34.396 "copy": false, 00:17:34.396 "nvme_iov_md": false 00:17:34.396 }, 00:17:34.396 "memory_domains": [ 00:17:34.396 { 00:17:34.396 "dma_device_id": "system", 00:17:34.396 "dma_device_type": 1 00:17:34.396 }, 00:17:34.396 { 00:17:34.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.396 "dma_device_type": 2 00:17:34.396 }, 00:17:34.396 { 00:17:34.396 "dma_device_id": "system", 00:17:34.396 "dma_device_type": 1 00:17:34.396 }, 00:17:34.396 { 00:17:34.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.396 "dma_device_type": 2 00:17:34.396 } 00:17:34.396 ], 00:17:34.396 "driver_specific": { 00:17:34.396 "raid": { 00:17:34.396 "uuid": "aab5d7e1-ecba-4fa7-8f54-5fff31bd3159", 00:17:34.396 "strip_size_kb": 0, 00:17:34.396 "state": "online", 00:17:34.396 "raid_level": "raid1", 00:17:34.396 "superblock": true, 00:17:34.396 "num_base_bdevs": 2, 00:17:34.396 "num_base_bdevs_discovered": 2, 00:17:34.396 "num_base_bdevs_operational": 2, 00:17:34.396 "base_bdevs_list": [ 00:17:34.396 { 00:17:34.396 "name": "pt1", 00:17:34.396 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.396 "is_configured": true, 00:17:34.396 "data_offset": 256, 00:17:34.396 "data_size": 7936 00:17:34.396 }, 00:17:34.396 { 00:17:34.396 "name": "pt2", 00:17:34.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.396 "is_configured": true, 00:17:34.396 "data_offset": 256, 00:17:34.396 "data_size": 7936 00:17:34.396 } 00:17:34.396 ] 00:17:34.396 } 00:17:34.396 } 00:17:34.396 }' 00:17:34.396 15:44:32 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:34.396 pt2' 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.396 15:44:32 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.396 15:44:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.397 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:34.397 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:34.397 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.397 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:34.397 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.397 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.397 [2024-11-25 15:44:33.028437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.397 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.397 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=aab5d7e1-ecba-4fa7-8f54-5fff31bd3159 00:17:34.397 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z aab5d7e1-ecba-4fa7-8f54-5fff31bd3159 ']' 00:17:34.397 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:34.397 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.397 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.397 [2024-11-25 15:44:33.072110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.397 [2024-11-25 15:44:33.072130] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.397 [2024-11-25 15:44:33.072196] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.397 [2024-11-25 15:44:33.072246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.397 [2024-11-25 15:44:33.072259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.656 [2024-11-25 15:44:33.203912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:34.656 [2024-11-25 15:44:33.205698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:34.656 [2024-11-25 15:44:33.205756] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:34.656 [2024-11-25 15:44:33.205804] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:34.656 [2024-11-25 15:44:33.205817] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.656 [2024-11-25 15:44:33.205827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:34.656 request: 00:17:34.656 { 00:17:34.656 "name": "raid_bdev1", 00:17:34.656 "raid_level": "raid1", 00:17:34.656 "base_bdevs": [ 00:17:34.656 "malloc1", 00:17:34.656 "malloc2" 00:17:34.656 ], 00:17:34.656 "superblock": false, 00:17:34.656 "method": "bdev_raid_create", 00:17:34.656 "req_id": 1 00:17:34.656 } 00:17:34.656 Got JSON-RPC error response 00:17:34.656 response: 00:17:34.656 { 00:17:34.656 "code": -17, 00:17:34.656 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:34.656 } 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:34.656 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.657 [2024-11-25 15:44:33.271782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:34.657 [2024-11-25 15:44:33.271870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.657 [2024-11-25 15:44:33.271918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:34.657 [2024-11-25 15:44:33.271970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.657 [2024-11-25 15:44:33.274008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.657 [2024-11-25 15:44:33.274087] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:34.657 [2024-11-25 15:44:33.274194] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:34.657 [2024-11-25 15:44:33.274275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:34.657 pt1 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.657 "name": "raid_bdev1", 00:17:34.657 "uuid": "aab5d7e1-ecba-4fa7-8f54-5fff31bd3159", 00:17:34.657 "strip_size_kb": 0, 00:17:34.657 "state": "configuring", 00:17:34.657 "raid_level": "raid1", 00:17:34.657 "superblock": true, 00:17:34.657 "num_base_bdevs": 2, 00:17:34.657 "num_base_bdevs_discovered": 1, 00:17:34.657 "num_base_bdevs_operational": 2, 00:17:34.657 "base_bdevs_list": [ 00:17:34.657 { 00:17:34.657 "name": "pt1", 00:17:34.657 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.657 "is_configured": true, 00:17:34.657 "data_offset": 256, 00:17:34.657 "data_size": 7936 00:17:34.657 }, 00:17:34.657 { 00:17:34.657 "name": null, 00:17:34.657 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.657 "is_configured": false, 00:17:34.657 "data_offset": 256, 00:17:34.657 "data_size": 7936 00:17:34.657 } 00:17:34.657 ] 00:17:34.657 }' 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.657 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.227 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:35.227 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:35.227 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:35.227 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:35.227 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.227 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:17:35.227 [2024-11-25 15:44:33.742988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.227 [2024-11-25 15:44:33.743053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.227 [2024-11-25 15:44:33.743087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:35.227 [2024-11-25 15:44:33.743097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.227 [2024-11-25 15:44:33.743450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.227 [2024-11-25 15:44:33.743484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.227 [2024-11-25 15:44:33.743542] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:35.227 [2024-11-25 15:44:33.743561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.227 [2024-11-25 15:44:33.743665] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:35.227 [2024-11-25 15:44:33.743675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:35.227 [2024-11-25 15:44:33.743893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:35.227 [2024-11-25 15:44:33.744104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:35.228 [2024-11-25 15:44:33.744147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:35.228 [2024-11-25 15:44:33.744306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.228 pt2 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:35.228 15:44:33 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.228 "name": "raid_bdev1", 00:17:35.228 "uuid": "aab5d7e1-ecba-4fa7-8f54-5fff31bd3159", 00:17:35.228 
"strip_size_kb": 0, 00:17:35.228 "state": "online", 00:17:35.228 "raid_level": "raid1", 00:17:35.228 "superblock": true, 00:17:35.228 "num_base_bdevs": 2, 00:17:35.228 "num_base_bdevs_discovered": 2, 00:17:35.228 "num_base_bdevs_operational": 2, 00:17:35.228 "base_bdevs_list": [ 00:17:35.228 { 00:17:35.228 "name": "pt1", 00:17:35.228 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:35.228 "is_configured": true, 00:17:35.228 "data_offset": 256, 00:17:35.228 "data_size": 7936 00:17:35.228 }, 00:17:35.228 { 00:17:35.228 "name": "pt2", 00:17:35.228 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.228 "is_configured": true, 00:17:35.228 "data_offset": 256, 00:17:35.228 "data_size": 7936 00:17:35.228 } 00:17:35.228 ] 00:17:35.228 }' 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.228 15:44:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.798 15:44:34 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.798 [2024-11-25 15:44:34.214418] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:35.798 "name": "raid_bdev1", 00:17:35.798 "aliases": [ 00:17:35.798 "aab5d7e1-ecba-4fa7-8f54-5fff31bd3159" 00:17:35.798 ], 00:17:35.798 "product_name": "Raid Volume", 00:17:35.798 "block_size": 4096, 00:17:35.798 "num_blocks": 7936, 00:17:35.798 "uuid": "aab5d7e1-ecba-4fa7-8f54-5fff31bd3159", 00:17:35.798 "assigned_rate_limits": { 00:17:35.798 "rw_ios_per_sec": 0, 00:17:35.798 "rw_mbytes_per_sec": 0, 00:17:35.798 "r_mbytes_per_sec": 0, 00:17:35.798 "w_mbytes_per_sec": 0 00:17:35.798 }, 00:17:35.798 "claimed": false, 00:17:35.798 "zoned": false, 00:17:35.798 "supported_io_types": { 00:17:35.798 "read": true, 00:17:35.798 "write": true, 00:17:35.798 "unmap": false, 00:17:35.798 "flush": false, 00:17:35.798 "reset": true, 00:17:35.798 "nvme_admin": false, 00:17:35.798 "nvme_io": false, 00:17:35.798 "nvme_io_md": false, 00:17:35.798 "write_zeroes": true, 00:17:35.798 "zcopy": false, 00:17:35.798 "get_zone_info": false, 00:17:35.798 "zone_management": false, 00:17:35.798 "zone_append": false, 00:17:35.798 "compare": false, 00:17:35.798 "compare_and_write": false, 00:17:35.798 "abort": false, 00:17:35.798 "seek_hole": false, 00:17:35.798 "seek_data": false, 00:17:35.798 "copy": false, 00:17:35.798 "nvme_iov_md": false 00:17:35.798 }, 00:17:35.798 "memory_domains": [ 00:17:35.798 { 00:17:35.798 "dma_device_id": "system", 00:17:35.798 "dma_device_type": 1 00:17:35.798 }, 00:17:35.798 { 00:17:35.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.798 "dma_device_type": 2 00:17:35.798 }, 00:17:35.798 { 00:17:35.798 "dma_device_id": "system", 00:17:35.798 
"dma_device_type": 1 00:17:35.798 }, 00:17:35.798 { 00:17:35.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.798 "dma_device_type": 2 00:17:35.798 } 00:17:35.798 ], 00:17:35.798 "driver_specific": { 00:17:35.798 "raid": { 00:17:35.798 "uuid": "aab5d7e1-ecba-4fa7-8f54-5fff31bd3159", 00:17:35.798 "strip_size_kb": 0, 00:17:35.798 "state": "online", 00:17:35.798 "raid_level": "raid1", 00:17:35.798 "superblock": true, 00:17:35.798 "num_base_bdevs": 2, 00:17:35.798 "num_base_bdevs_discovered": 2, 00:17:35.798 "num_base_bdevs_operational": 2, 00:17:35.798 "base_bdevs_list": [ 00:17:35.798 { 00:17:35.798 "name": "pt1", 00:17:35.798 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:35.798 "is_configured": true, 00:17:35.798 "data_offset": 256, 00:17:35.798 "data_size": 7936 00:17:35.798 }, 00:17:35.798 { 00:17:35.798 "name": "pt2", 00:17:35.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.798 "is_configured": true, 00:17:35.798 "data_offset": 256, 00:17:35.798 "data_size": 7936 00:17:35.798 } 00:17:35.798 ] 00:17:35.798 } 00:17:35.798 } 00:17:35.798 }' 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:35.798 pt2' 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.798 
15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.798 [2024-11-25 15:44:34.438033] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' aab5d7e1-ecba-4fa7-8f54-5fff31bd3159 '!=' aab5d7e1-ecba-4fa7-8f54-5fff31bd3159 ']' 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:35.798 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.058 [2024-11-25 15:44:34.485769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.058 "name": "raid_bdev1", 00:17:36.058 "uuid": "aab5d7e1-ecba-4fa7-8f54-5fff31bd3159", 00:17:36.058 "strip_size_kb": 0, 00:17:36.058 "state": "online", 00:17:36.058 "raid_level": "raid1", 00:17:36.058 "superblock": true, 00:17:36.058 "num_base_bdevs": 2, 00:17:36.058 "num_base_bdevs_discovered": 1, 00:17:36.058 "num_base_bdevs_operational": 1, 00:17:36.058 "base_bdevs_list": [ 00:17:36.058 { 00:17:36.058 "name": null, 00:17:36.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.058 "is_configured": false, 00:17:36.058 "data_offset": 0, 00:17:36.058 "data_size": 7936 00:17:36.058 }, 00:17:36.058 { 00:17:36.058 "name": "pt2", 00:17:36.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.058 "is_configured": true, 00:17:36.058 "data_offset": 256, 00:17:36.058 "data_size": 7936 00:17:36.058 } 00:17:36.058 ] 00:17:36.058 }' 00:17:36.058 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.058 15:44:34 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.319 [2024-11-25 15:44:34.933016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.319 [2024-11-25 15:44:34.933036] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.319 [2024-11-25 15:44:34.933089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.319 [2024-11-25 15:44:34.933127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.319 [2024-11-25 15:44:34.933137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.319 15:44:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.578 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.578 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:36.578 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:36.578 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:36.578 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:36.578 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:36.578 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.578 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.578 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.578 [2024-11-25 15:44:35.008878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:36.578 [2024-11-25 15:44:35.008978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.578 [2024-11-25 15:44:35.009021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:36.578 [2024-11-25 15:44:35.009056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.578 [2024-11-25 15:44:35.011206] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.578 [2024-11-25 15:44:35.011276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.578 [2024-11-25 15:44:35.011371] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:36.578 [2024-11-25 15:44:35.011449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:36.578 [2024-11-25 15:44:35.011584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:36.578 [2024-11-25 15:44:35.011626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:36.578 [2024-11-25 15:44:35.011845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:36.578 [2024-11-25 15:44:35.012036] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:36.578 [2024-11-25 15:44:35.012079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:36.578 [2024-11-25 15:44:35.012241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.578 pt2 00:17:36.578 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.578 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.578 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.578 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.578 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.579 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.579 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:36.579 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.579 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.579 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.579 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.579 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.579 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.579 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.579 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.579 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.579 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.579 "name": "raid_bdev1", 00:17:36.579 "uuid": "aab5d7e1-ecba-4fa7-8f54-5fff31bd3159", 00:17:36.579 "strip_size_kb": 0, 00:17:36.579 "state": "online", 00:17:36.579 "raid_level": "raid1", 00:17:36.579 "superblock": true, 00:17:36.579 "num_base_bdevs": 2, 00:17:36.579 "num_base_bdevs_discovered": 1, 00:17:36.579 "num_base_bdevs_operational": 1, 00:17:36.579 "base_bdevs_list": [ 00:17:36.579 { 00:17:36.579 "name": null, 00:17:36.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.579 "is_configured": false, 00:17:36.579 "data_offset": 256, 00:17:36.579 "data_size": 7936 00:17:36.579 }, 00:17:36.579 { 00:17:36.579 "name": "pt2", 00:17:36.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.579 "is_configured": true, 00:17:36.579 "data_offset": 256, 00:17:36.579 "data_size": 7936 00:17:36.579 } 00:17:36.579 ] 00:17:36.579 }' 
00:17:36.579 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.579 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.839 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:36.839 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.839 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.839 [2024-11-25 15:44:35.456074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.839 [2024-11-25 15:44:35.456138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.839 [2024-11-25 15:44:35.456220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.839 [2024-11-25 15:44:35.456274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.839 [2024-11-25 15:44:35.456305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:36.839 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.839 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.839 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.839 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.839 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:36.839 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.839 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:36.839 15:44:35 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:36.839 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:36.839 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:36.839 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.839 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.099 [2024-11-25 15:44:35.519984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:37.099 [2024-11-25 15:44:35.520081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.099 [2024-11-25 15:44:35.520100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:37.099 [2024-11-25 15:44:35.520109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.099 [2024-11-25 15:44:35.522166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.099 [2024-11-25 15:44:35.522195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:37.099 [2024-11-25 15:44:35.522257] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:37.099 [2024-11-25 15:44:35.522302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:37.099 [2024-11-25 15:44:35.522420] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:37.099 [2024-11-25 15:44:35.522429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.099 [2024-11-25 15:44:35.522442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:37.099 [2024-11-25 15:44:35.522512] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.099 [2024-11-25 15:44:35.522580] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:37.099 [2024-11-25 15:44:35.522587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:37.099 [2024-11-25 15:44:35.522811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:37.099 [2024-11-25 15:44:35.522950] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:37.099 [2024-11-25 15:44:35.522961] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:37.099 [2024-11-25 15:44:35.523112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.099 pt1 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.099 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.099 "name": "raid_bdev1", 00:17:37.099 "uuid": "aab5d7e1-ecba-4fa7-8f54-5fff31bd3159", 00:17:37.099 "strip_size_kb": 0, 00:17:37.099 "state": "online", 00:17:37.099 "raid_level": "raid1", 00:17:37.099 "superblock": true, 00:17:37.099 "num_base_bdevs": 2, 00:17:37.099 "num_base_bdevs_discovered": 1, 00:17:37.099 "num_base_bdevs_operational": 1, 00:17:37.099 "base_bdevs_list": [ 00:17:37.099 { 00:17:37.099 "name": null, 00:17:37.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.099 "is_configured": false, 00:17:37.099 "data_offset": 256, 00:17:37.099 "data_size": 7936 00:17:37.099 }, 00:17:37.099 { 00:17:37.099 "name": "pt2", 00:17:37.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.100 "is_configured": true, 00:17:37.100 "data_offset": 256, 00:17:37.100 "data_size": 7936 00:17:37.100 } 00:17:37.100 ] 00:17:37.100 }' 00:17:37.100 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.100 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.360 15:44:35 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:37.360 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:37.360 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.360 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.360 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.360 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:37.360 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:37.360 15:44:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:37.360 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.360 15:44:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.360 [2024-11-25 15:44:35.995371] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.360 15:44:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.360 15:44:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' aab5d7e1-ecba-4fa7-8f54-5fff31bd3159 '!=' aab5d7e1-ecba-4fa7-8f54-5fff31bd3159 ']' 00:17:37.360 15:44:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85780 00:17:37.360 15:44:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 85780 ']' 00:17:37.360 15:44:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 85780 00:17:37.360 15:44:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:37.360 15:44:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:17:37.360 15:44:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85780 00:17:37.620 15:44:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:37.620 killing process with pid 85780 00:17:37.620 15:44:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:37.620 15:44:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85780' 00:17:37.620 15:44:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 85780 00:17:37.620 [2024-11-25 15:44:36.060250] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:37.620 [2024-11-25 15:44:36.060313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.620 [2024-11-25 15:44:36.060351] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.620 [2024-11-25 15:44:36.060364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:37.620 15:44:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 85780 00:17:37.620 [2024-11-25 15:44:36.254957] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:39.003 15:44:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:39.003 ************************************ 00:17:39.003 END TEST raid_superblock_test_4k 00:17:39.003 00:17:39.003 real 0m5.939s 00:17:39.003 user 0m9.018s 00:17:39.003 sys 0m1.122s 00:17:39.003 15:44:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.003 15:44:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.003 ************************************ 00:17:39.003 15:44:37 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:17:39.003 15:44:37 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:39.003 15:44:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:39.003 15:44:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.003 15:44:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:39.003 ************************************ 00:17:39.003 START TEST raid_rebuild_test_sb_4k 00:17:39.003 ************************************ 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:39.003 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:39.004 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:39.004 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:39.004 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:39.004 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:39.004 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:39.004 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:39.004 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86108 00:17:39.004 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:39.004 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86108 00:17:39.004 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86108 ']' 00:17:39.004 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:39.004 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.004 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.004 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.004 15:44:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.004 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:39.004 Zero copy mechanism will not be used. 00:17:39.004 [2024-11-25 15:44:37.451119] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:17:39.004 [2024-11-25 15:44:37.451222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86108 ] 00:17:39.004 [2024-11-25 15:44:37.623601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.264 [2024-11-25 15:44:37.729089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.264 [2024-11-25 15:44:37.916000] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.264 [2024-11-25 15:44:37.916054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:39.835 
15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.835 BaseBdev1_malloc 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.835 [2024-11-25 15:44:38.295359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:39.835 [2024-11-25 15:44:38.295450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.835 [2024-11-25 15:44:38.295484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:39.835 [2024-11-25 15:44:38.295496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.835 [2024-11-25 15:44:38.297478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.835 [2024-11-25 15:44:38.297601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:39.835 BaseBdev1 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:39.835 BaseBdev2_malloc 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.835 [2024-11-25 15:44:38.348837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:39.835 [2024-11-25 15:44:38.348900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.835 [2024-11-25 15:44:38.348918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:39.835 [2024-11-25 15:44:38.348931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.835 [2024-11-25 15:44:38.350916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.835 [2024-11-25 15:44:38.350955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:39.835 BaseBdev2 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:39.835 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.836 spare_malloc 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.836 spare_delay 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.836 [2024-11-25 15:44:38.425577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:39.836 [2024-11-25 15:44:38.425637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.836 [2024-11-25 15:44:38.425671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:39.836 [2024-11-25 15:44:38.425681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.836 [2024-11-25 15:44:38.427654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.836 [2024-11-25 15:44:38.427696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:39.836 spare 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.836 
[2024-11-25 15:44:38.437614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.836 [2024-11-25 15:44:38.439307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.836 [2024-11-25 15:44:38.439483] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:39.836 [2024-11-25 15:44:38.439500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:39.836 [2024-11-25 15:44:38.439733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:39.836 [2024-11-25 15:44:38.439886] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:39.836 [2024-11-25 15:44:38.439895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:39.836 [2024-11-25 15:44:38.440038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.836 "name": "raid_bdev1", 00:17:39.836 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:39.836 "strip_size_kb": 0, 00:17:39.836 "state": "online", 00:17:39.836 "raid_level": "raid1", 00:17:39.836 "superblock": true, 00:17:39.836 "num_base_bdevs": 2, 00:17:39.836 "num_base_bdevs_discovered": 2, 00:17:39.836 "num_base_bdevs_operational": 2, 00:17:39.836 "base_bdevs_list": [ 00:17:39.836 { 00:17:39.836 "name": "BaseBdev1", 00:17:39.836 "uuid": "91360592-2d13-5719-b9bc-20e92a76ad3e", 00:17:39.836 "is_configured": true, 00:17:39.836 "data_offset": 256, 00:17:39.836 "data_size": 7936 00:17:39.836 }, 00:17:39.836 { 00:17:39.836 "name": "BaseBdev2", 00:17:39.836 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:39.836 "is_configured": true, 00:17:39.836 "data_offset": 256, 00:17:39.836 "data_size": 7936 00:17:39.836 } 00:17:39.836 ] 00:17:39.836 }' 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.836 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.407 [2024-11-25 15:44:38.861213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:40.407 15:44:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:40.668 [2024-11-25 15:44:39.116708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:40.668 /dev/nbd0 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:40.668 1+0 records in 00:17:40.668 1+0 records out 00:17:40.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522823 s, 7.8 MB/s 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:40.668 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:41.238 7936+0 records in 00:17:41.238 7936+0 records out 00:17:41.238 32505856 bytes (33 MB, 31 MiB) copied, 0.628074 s, 51.8 MB/s 00:17:41.238 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:41.238 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:41.238 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:41.238 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:41.238 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:41.238 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:41.238 15:44:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:41.498 [2024-11-25 15:44:40.027378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.498 [2024-11-25 15:44:40.039427] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.498 "name": 
"raid_bdev1", 00:17:41.498 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:41.498 "strip_size_kb": 0, 00:17:41.498 "state": "online", 00:17:41.498 "raid_level": "raid1", 00:17:41.498 "superblock": true, 00:17:41.498 "num_base_bdevs": 2, 00:17:41.498 "num_base_bdevs_discovered": 1, 00:17:41.498 "num_base_bdevs_operational": 1, 00:17:41.498 "base_bdevs_list": [ 00:17:41.498 { 00:17:41.498 "name": null, 00:17:41.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.498 "is_configured": false, 00:17:41.498 "data_offset": 0, 00:17:41.498 "data_size": 7936 00:17:41.498 }, 00:17:41.498 { 00:17:41.498 "name": "BaseBdev2", 00:17:41.498 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:41.498 "is_configured": true, 00:17:41.498 "data_offset": 256, 00:17:41.498 "data_size": 7936 00:17:41.498 } 00:17:41.498 ] 00:17:41.498 }' 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.498 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.069 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:42.069 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.069 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.069 [2024-11-25 15:44:40.470686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.069 [2024-11-25 15:44:40.487621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:42.069 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.069 15:44:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:42.069 [2024-11-25 15:44:40.489379] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:43.009 15:44:41 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.009 "name": "raid_bdev1", 00:17:43.009 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:43.009 "strip_size_kb": 0, 00:17:43.009 "state": "online", 00:17:43.009 "raid_level": "raid1", 00:17:43.009 "superblock": true, 00:17:43.009 "num_base_bdevs": 2, 00:17:43.009 "num_base_bdevs_discovered": 2, 00:17:43.009 "num_base_bdevs_operational": 2, 00:17:43.009 "process": { 00:17:43.009 "type": "rebuild", 00:17:43.009 "target": "spare", 00:17:43.009 "progress": { 00:17:43.009 "blocks": 2560, 00:17:43.009 "percent": 32 00:17:43.009 } 00:17:43.009 }, 00:17:43.009 "base_bdevs_list": [ 00:17:43.009 { 00:17:43.009 "name": "spare", 00:17:43.009 "uuid": "f686243e-e598-5552-a9fa-3076db469d99", 00:17:43.009 "is_configured": true, 00:17:43.009 "data_offset": 256, 
00:17:43.009 "data_size": 7936 00:17:43.009 }, 00:17:43.009 { 00:17:43.009 "name": "BaseBdev2", 00:17:43.009 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:43.009 "is_configured": true, 00:17:43.009 "data_offset": 256, 00:17:43.009 "data_size": 7936 00:17:43.009 } 00:17:43.009 ] 00:17:43.009 }' 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.009 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.009 [2024-11-25 15:44:41.640952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.269 [2024-11-25 15:44:41.693814] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:43.269 [2024-11-25 15:44:41.693872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.269 [2024-11-25 15:44:41.693885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.269 [2024-11-25 15:44:41.693894] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:43.269 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.269 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.269 
15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.269 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.269 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.270 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.270 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:43.270 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.270 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.270 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.270 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.270 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.270 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.270 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.270 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.270 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.270 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.270 "name": "raid_bdev1", 00:17:43.270 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:43.270 "strip_size_kb": 0, 00:17:43.270 "state": "online", 00:17:43.270 "raid_level": "raid1", 00:17:43.270 "superblock": true, 00:17:43.270 "num_base_bdevs": 2, 00:17:43.270 "num_base_bdevs_discovered": 1, 00:17:43.270 
"num_base_bdevs_operational": 1, 00:17:43.270 "base_bdevs_list": [ 00:17:43.270 { 00:17:43.270 "name": null, 00:17:43.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.270 "is_configured": false, 00:17:43.270 "data_offset": 0, 00:17:43.270 "data_size": 7936 00:17:43.270 }, 00:17:43.270 { 00:17:43.270 "name": "BaseBdev2", 00:17:43.270 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:43.270 "is_configured": true, 00:17:43.270 "data_offset": 256, 00:17:43.270 "data_size": 7936 00:17:43.270 } 00:17:43.270 ] 00:17:43.270 }' 00:17:43.270 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.270 15:44:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.530 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.530 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.530 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.530 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.530 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.530 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.530 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.530 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.530 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.530 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.790 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.790 
"name": "raid_bdev1", 00:17:43.790 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:43.790 "strip_size_kb": 0, 00:17:43.790 "state": "online", 00:17:43.790 "raid_level": "raid1", 00:17:43.790 "superblock": true, 00:17:43.790 "num_base_bdevs": 2, 00:17:43.790 "num_base_bdevs_discovered": 1, 00:17:43.790 "num_base_bdevs_operational": 1, 00:17:43.790 "base_bdevs_list": [ 00:17:43.790 { 00:17:43.790 "name": null, 00:17:43.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.790 "is_configured": false, 00:17:43.790 "data_offset": 0, 00:17:43.790 "data_size": 7936 00:17:43.790 }, 00:17:43.790 { 00:17:43.790 "name": "BaseBdev2", 00:17:43.790 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:43.790 "is_configured": true, 00:17:43.790 "data_offset": 256, 00:17:43.790 "data_size": 7936 00:17:43.790 } 00:17:43.790 ] 00:17:43.790 }' 00:17:43.790 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.790 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:43.790 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.790 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:43.790 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:43.790 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.790 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.790 [2024-11-25 15:44:42.319294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:43.790 [2024-11-25 15:44:42.334581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:43.790 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:43.790 15:44:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:43.790 [2024-11-25 15:44:42.336329] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:44.731 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.731 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.731 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.731 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.731 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.731 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.731 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.731 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.731 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.731 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.731 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.731 "name": "raid_bdev1", 00:17:44.731 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:44.731 "strip_size_kb": 0, 00:17:44.731 "state": "online", 00:17:44.731 "raid_level": "raid1", 00:17:44.731 "superblock": true, 00:17:44.731 "num_base_bdevs": 2, 00:17:44.731 "num_base_bdevs_discovered": 2, 00:17:44.731 "num_base_bdevs_operational": 2, 00:17:44.731 "process": { 00:17:44.731 "type": "rebuild", 00:17:44.731 "target": "spare", 00:17:44.731 "progress": { 00:17:44.731 "blocks": 2560, 00:17:44.731 
"percent": 32 00:17:44.731 } 00:17:44.731 }, 00:17:44.731 "base_bdevs_list": [ 00:17:44.731 { 00:17:44.731 "name": "spare", 00:17:44.731 "uuid": "f686243e-e598-5552-a9fa-3076db469d99", 00:17:44.731 "is_configured": true, 00:17:44.731 "data_offset": 256, 00:17:44.731 "data_size": 7936 00:17:44.731 }, 00:17:44.731 { 00:17:44.731 "name": "BaseBdev2", 00:17:44.731 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:44.731 "is_configured": true, 00:17:44.731 "data_offset": 256, 00:17:44.731 "data_size": 7936 00:17:44.731 } 00:17:44.731 ] 00:17:44.731 }' 00:17:44.731 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:44.992 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=656 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.992 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.992 "name": "raid_bdev1", 00:17:44.992 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:44.992 "strip_size_kb": 0, 00:17:44.992 "state": "online", 00:17:44.992 "raid_level": "raid1", 00:17:44.992 "superblock": true, 00:17:44.992 "num_base_bdevs": 2, 00:17:44.992 "num_base_bdevs_discovered": 2, 00:17:44.992 "num_base_bdevs_operational": 2, 00:17:44.992 "process": { 00:17:44.992 "type": "rebuild", 00:17:44.992 "target": "spare", 00:17:44.992 "progress": { 00:17:44.992 "blocks": 2816, 00:17:44.992 "percent": 35 00:17:44.992 } 00:17:44.992 }, 00:17:44.992 "base_bdevs_list": [ 00:17:44.993 { 00:17:44.993 "name": "spare", 00:17:44.993 "uuid": "f686243e-e598-5552-a9fa-3076db469d99", 00:17:44.993 "is_configured": true, 00:17:44.993 "data_offset": 256, 00:17:44.993 "data_size": 7936 00:17:44.993 }, 00:17:44.993 { 00:17:44.993 "name": "BaseBdev2", 
00:17:44.993 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:44.993 "is_configured": true, 00:17:44.993 "data_offset": 256, 00:17:44.993 "data_size": 7936 00:17:44.993 } 00:17:44.993 ] 00:17:44.993 }' 00:17:44.993 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.993 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.993 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.993 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.993 15:44:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.376 "name": "raid_bdev1", 00:17:46.376 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:46.376 "strip_size_kb": 0, 00:17:46.376 "state": "online", 00:17:46.376 "raid_level": "raid1", 00:17:46.376 "superblock": true, 00:17:46.376 "num_base_bdevs": 2, 00:17:46.376 "num_base_bdevs_discovered": 2, 00:17:46.376 "num_base_bdevs_operational": 2, 00:17:46.376 "process": { 00:17:46.376 "type": "rebuild", 00:17:46.376 "target": "spare", 00:17:46.376 "progress": { 00:17:46.376 "blocks": 5888, 00:17:46.376 "percent": 74 00:17:46.376 } 00:17:46.376 }, 00:17:46.376 "base_bdevs_list": [ 00:17:46.376 { 00:17:46.376 "name": "spare", 00:17:46.376 "uuid": "f686243e-e598-5552-a9fa-3076db469d99", 00:17:46.376 "is_configured": true, 00:17:46.376 "data_offset": 256, 00:17:46.376 "data_size": 7936 00:17:46.376 }, 00:17:46.376 { 00:17:46.376 "name": "BaseBdev2", 00:17:46.376 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:46.376 "is_configured": true, 00:17:46.376 "data_offset": 256, 00:17:46.376 "data_size": 7936 00:17:46.376 } 00:17:46.376 ] 00:17:46.376 }' 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.376 15:44:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:46.948 [2024-11-25 15:44:45.447425] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:46.948 [2024-11-25 15:44:45.447496] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:46.948 [2024-11-25 15:44:45.447583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.207 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:47.207 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.208 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.208 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.208 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.208 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.208 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.208 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.208 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.208 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.208 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.208 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.208 "name": "raid_bdev1", 00:17:47.208 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:47.208 "strip_size_kb": 0, 00:17:47.208 "state": "online", 00:17:47.208 "raid_level": "raid1", 00:17:47.208 "superblock": true, 00:17:47.208 "num_base_bdevs": 2, 00:17:47.208 "num_base_bdevs_discovered": 2, 00:17:47.208 "num_base_bdevs_operational": 2, 00:17:47.208 "base_bdevs_list": [ 00:17:47.208 { 00:17:47.208 "name": 
"spare", 00:17:47.208 "uuid": "f686243e-e598-5552-a9fa-3076db469d99", 00:17:47.208 "is_configured": true, 00:17:47.208 "data_offset": 256, 00:17:47.208 "data_size": 7936 00:17:47.208 }, 00:17:47.208 { 00:17:47.208 "name": "BaseBdev2", 00:17:47.208 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:47.208 "is_configured": true, 00:17:47.208 "data_offset": 256, 00:17:47.208 "data_size": 7936 00:17:47.208 } 00:17:47.208 ] 00:17:47.208 }' 00:17:47.208 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.208 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:47.208 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.468 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:47.468 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:47.468 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.468 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.468 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.468 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.468 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.468 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.468 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.468 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.468 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.468 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.468 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.468 "name": "raid_bdev1", 00:17:47.468 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:47.468 "strip_size_kb": 0, 00:17:47.468 "state": "online", 00:17:47.468 "raid_level": "raid1", 00:17:47.468 "superblock": true, 00:17:47.468 "num_base_bdevs": 2, 00:17:47.468 "num_base_bdevs_discovered": 2, 00:17:47.468 "num_base_bdevs_operational": 2, 00:17:47.468 "base_bdevs_list": [ 00:17:47.468 { 00:17:47.468 "name": "spare", 00:17:47.468 "uuid": "f686243e-e598-5552-a9fa-3076db469d99", 00:17:47.468 "is_configured": true, 00:17:47.468 "data_offset": 256, 00:17:47.468 "data_size": 7936 00:17:47.468 }, 00:17:47.468 { 00:17:47.468 "name": "BaseBdev2", 00:17:47.468 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:47.468 "is_configured": true, 00:17:47.468 "data_offset": 256, 00:17:47.468 "data_size": 7936 00:17:47.468 } 00:17:47.468 ] 00:17:47.468 }' 00:17:47.468 15:44:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.468 "name": "raid_bdev1", 00:17:47.468 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:47.468 "strip_size_kb": 0, 00:17:47.468 "state": "online", 00:17:47.468 "raid_level": "raid1", 00:17:47.468 "superblock": true, 00:17:47.468 "num_base_bdevs": 2, 00:17:47.468 "num_base_bdevs_discovered": 2, 00:17:47.468 "num_base_bdevs_operational": 2, 00:17:47.468 "base_bdevs_list": [ 00:17:47.468 { 00:17:47.468 "name": "spare", 00:17:47.468 "uuid": "f686243e-e598-5552-a9fa-3076db469d99", 00:17:47.468 "is_configured": true, 00:17:47.468 "data_offset": 256, 00:17:47.468 "data_size": 7936 00:17:47.468 }, 00:17:47.468 
{ 00:17:47.468 "name": "BaseBdev2", 00:17:47.468 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:47.468 "is_configured": true, 00:17:47.468 "data_offset": 256, 00:17:47.468 "data_size": 7936 00:17:47.468 } 00:17:47.468 ] 00:17:47.468 }' 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.468 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.039 [2024-11-25 15:44:46.466312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.039 [2024-11-25 15:44:46.466346] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.039 [2024-11-25 15:44:46.466412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.039 [2024-11-25 15:44:46.466470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.039 [2024-11-25 15:44:46.466479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.039 
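An aside on the error captured earlier in this trace — `bdev_raid.sh: line 666: [: =: unary operator expected` at the `'[' = false ']'` step. That is the classic single-bracket pitfall: an unset or empty variable passed unquoted to `[` disappears after word splitting, leaving `[ = false ]`. A minimal sketch (the `flag` variable here is hypothetical, standing in for whatever the test script left empty at line 666):

```shell
#!/usr/bin/env bash
# Empty, like the unset variable the trace shows at bdev_raid.sh line 666.
flag=""

# Quoting the expansion keeps `[` seeing two operands even when $flag is
# empty. Unquoted, `[ $flag = false ]` becomes `[ = false ]`, which is
# exactly the "unary operator expected" error recorded in the log.
if [ "$flag" = false ]; then
  result="fast path"
else
  result="full rebuild check"
fi
echo "$result"
```

Using `[[ $flag = false ]]` would also avoid the error, since `[[ ]]` does not word-split unquoted expansions; the log's own `[[ none == \n\o\n\e ]]` comparisons are safe for that reason.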
15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:48.039 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:48.039 /dev/nbd0 00:17:48.299 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:48.300 15:44:46 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:48.300 1+0 records in 00:17:48.300 1+0 records out 00:17:48.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042179 s, 9.7 MB/s 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:48.300 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:48.300 /dev/nbd1 00:17:48.560 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:48.560 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:48.560 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:48.560 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:48.560 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:48.560 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:48.560 15:44:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:48.560 1+0 records in 00:17:48.560 1+0 records out 00:17:48.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360343 s, 11.4 MB/s 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:48.560 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:48.820 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:48.820 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:48.820 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:48.820 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:48.820 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:48.820 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:48.820 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:17:48.820 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:48.820 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:48.820 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.080 [2024-11-25 15:44:47.628686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:49.080 [2024-11-25 15:44:47.628739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.080 [2024-11-25 15:44:47.628760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:49.080 [2024-11-25 15:44:47.628768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.080 [2024-11-25 15:44:47.630963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.080 [2024-11-25 15:44:47.631001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:49.080 [2024-11-25 15:44:47.631098] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:49.080 [2024-11-25 15:44:47.631151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.080 [2024-11-25 15:44:47.631298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.080 spare 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.080 [2024-11-25 15:44:47.731216] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:49.080 [2024-11-25 15:44:47.731245] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:49.080 [2024-11-25 15:44:47.731483] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:49.080 [2024-11-25 15:44:47.731680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:49.080 [2024-11-25 15:44:47.731700] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:49.080 [2024-11-25 15:44:47.731865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.080 15:44:47 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.080 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.340 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.340 "name": "raid_bdev1", 00:17:49.340 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:49.340 "strip_size_kb": 0, 00:17:49.340 "state": "online", 00:17:49.340 "raid_level": "raid1", 00:17:49.340 "superblock": true, 00:17:49.340 "num_base_bdevs": 2, 00:17:49.340 "num_base_bdevs_discovered": 2, 00:17:49.340 "num_base_bdevs_operational": 2, 00:17:49.340 "base_bdevs_list": [ 00:17:49.340 { 00:17:49.340 "name": "spare", 00:17:49.340 "uuid": "f686243e-e598-5552-a9fa-3076db469d99", 00:17:49.340 "is_configured": true, 00:17:49.340 "data_offset": 256, 00:17:49.340 "data_size": 7936 00:17:49.340 }, 00:17:49.340 { 00:17:49.340 "name": "BaseBdev2", 00:17:49.340 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:49.340 "is_configured": true, 00:17:49.340 "data_offset": 256, 00:17:49.340 "data_size": 7936 00:17:49.340 } 00:17:49.340 ] 00:17:49.340 }' 00:17:49.340 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.340 15:44:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.600 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.600 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.600 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.600 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.600 15:44:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.600 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.600 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.600 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.600 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.600 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.600 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.600 "name": "raid_bdev1", 00:17:49.601 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:49.601 "strip_size_kb": 0, 00:17:49.601 "state": "online", 00:17:49.601 "raid_level": "raid1", 00:17:49.601 "superblock": true, 00:17:49.601 "num_base_bdevs": 2, 00:17:49.601 "num_base_bdevs_discovered": 2, 00:17:49.601 "num_base_bdevs_operational": 2, 00:17:49.601 "base_bdevs_list": [ 00:17:49.601 { 00:17:49.601 "name": "spare", 00:17:49.601 "uuid": "f686243e-e598-5552-a9fa-3076db469d99", 00:17:49.601 "is_configured": true, 00:17:49.601 "data_offset": 256, 00:17:49.601 "data_size": 7936 00:17:49.601 }, 00:17:49.601 { 00:17:49.601 "name": "BaseBdev2", 00:17:49.601 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:49.601 "is_configured": true, 00:17:49.601 "data_offset": 256, 00:17:49.601 "data_size": 7936 00:17:49.601 } 00:17:49.601 ] 00:17:49.601 }' 00:17:49.601 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.601 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.601 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.861 15:44:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.861 [2024-11-25 15:44:48.367592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.861 "name": "raid_bdev1", 00:17:49.861 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:49.861 "strip_size_kb": 0, 00:17:49.861 "state": "online", 00:17:49.861 "raid_level": "raid1", 00:17:49.861 "superblock": true, 00:17:49.861 "num_base_bdevs": 2, 00:17:49.861 "num_base_bdevs_discovered": 1, 00:17:49.861 "num_base_bdevs_operational": 1, 00:17:49.861 "base_bdevs_list": [ 00:17:49.861 { 00:17:49.861 "name": null, 00:17:49.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.861 "is_configured": false, 00:17:49.861 "data_offset": 0, 00:17:49.861 "data_size": 7936 00:17:49.861 }, 00:17:49.861 { 00:17:49.861 "name": "BaseBdev2", 00:17:49.861 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:49.861 "is_configured": true, 00:17:49.861 "data_offset": 256, 00:17:49.861 "data_size": 7936 00:17:49.861 } 00:17:49.861 ] 00:17:49.861 }' 
00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.861 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.122 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:50.122 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.122 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.122 [2024-11-25 15:44:48.795137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.122 [2024-11-25 15:44:48.795284] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:50.122 [2024-11-25 15:44:48.795305] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:50.122 [2024-11-25 15:44:48.795335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.382 [2024-11-25 15:44:48.810892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:50.382 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.382 15:44:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:50.382 [2024-11-25 15:44:48.812749] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:51.346 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.346 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.346 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.347 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.347 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.347 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.347 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.347 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.347 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.347 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.347 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.347 "name": "raid_bdev1", 00:17:51.347 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:51.347 "strip_size_kb": 0, 00:17:51.347 "state": "online", 00:17:51.347 "raid_level": "raid1", 00:17:51.347 "superblock": true, 00:17:51.347 "num_base_bdevs": 2, 00:17:51.347 "num_base_bdevs_discovered": 2, 00:17:51.347 "num_base_bdevs_operational": 2, 00:17:51.347 "process": { 00:17:51.347 "type": "rebuild", 00:17:51.347 "target": "spare", 00:17:51.347 "progress": { 00:17:51.347 "blocks": 2560, 00:17:51.347 "percent": 32 00:17:51.347 } 00:17:51.347 }, 00:17:51.347 "base_bdevs_list": [ 00:17:51.347 { 00:17:51.347 "name": "spare", 00:17:51.347 "uuid": "f686243e-e598-5552-a9fa-3076db469d99", 00:17:51.347 "is_configured": true, 00:17:51.347 "data_offset": 256, 00:17:51.347 "data_size": 7936 00:17:51.347 }, 00:17:51.347 { 00:17:51.347 "name": "BaseBdev2", 00:17:51.347 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:51.347 "is_configured": true, 00:17:51.347 "data_offset": 256, 00:17:51.347 "data_size": 7936 00:17:51.347 } 00:17:51.347 ] 00:17:51.347 }' 00:17:51.347 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:17:51.347 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.347 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.347 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.347 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:51.347 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.347 15:44:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.347 [2024-11-25 15:44:49.972574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.347 [2024-11-25 15:44:50.017469] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:51.347 [2024-11-25 15:44:50.017521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.347 [2024-11-25 15:44:50.017535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.347 [2024-11-25 15:44:50.017543] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.622 "name": "raid_bdev1", 00:17:51.622 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:51.622 "strip_size_kb": 0, 00:17:51.622 "state": "online", 00:17:51.622 "raid_level": "raid1", 00:17:51.622 "superblock": true, 00:17:51.622 "num_base_bdevs": 2, 00:17:51.622 "num_base_bdevs_discovered": 1, 00:17:51.622 "num_base_bdevs_operational": 1, 00:17:51.622 "base_bdevs_list": [ 00:17:51.622 { 00:17:51.622 "name": null, 00:17:51.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.622 "is_configured": false, 00:17:51.622 "data_offset": 0, 00:17:51.622 "data_size": 7936 00:17:51.622 }, 00:17:51.622 { 00:17:51.622 "name": "BaseBdev2", 00:17:51.622 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:51.622 "is_configured": true, 00:17:51.622 
"data_offset": 256, 00:17:51.622 "data_size": 7936 00:17:51.622 } 00:17:51.622 ] 00:17:51.622 }' 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.622 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.882 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:51.882 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.882 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.882 [2024-11-25 15:44:50.497696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:51.882 [2024-11-25 15:44:50.497750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.882 [2024-11-25 15:44:50.497768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:51.882 [2024-11-25 15:44:50.497779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.882 [2024-11-25 15:44:50.498251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.882 [2024-11-25 15:44:50.498282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:51.882 [2024-11-25 15:44:50.498358] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:51.882 [2024-11-25 15:44:50.498382] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:51.882 [2024-11-25 15:44:50.498391] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:51.882 [2024-11-25 15:44:50.498418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.882 [2024-11-25 15:44:50.512959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:51.882 spare 00:17:51.882 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.882 15:44:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:51.882 [2024-11-25 15:44:50.514767] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.264 "name": "raid_bdev1", 00:17:53.264 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:53.264 "strip_size_kb": 0, 00:17:53.264 
"state": "online", 00:17:53.264 "raid_level": "raid1", 00:17:53.264 "superblock": true, 00:17:53.264 "num_base_bdevs": 2, 00:17:53.264 "num_base_bdevs_discovered": 2, 00:17:53.264 "num_base_bdevs_operational": 2, 00:17:53.264 "process": { 00:17:53.264 "type": "rebuild", 00:17:53.264 "target": "spare", 00:17:53.264 "progress": { 00:17:53.264 "blocks": 2560, 00:17:53.264 "percent": 32 00:17:53.264 } 00:17:53.264 }, 00:17:53.264 "base_bdevs_list": [ 00:17:53.264 { 00:17:53.264 "name": "spare", 00:17:53.264 "uuid": "f686243e-e598-5552-a9fa-3076db469d99", 00:17:53.264 "is_configured": true, 00:17:53.264 "data_offset": 256, 00:17:53.264 "data_size": 7936 00:17:53.264 }, 00:17:53.264 { 00:17:53.264 "name": "BaseBdev2", 00:17:53.264 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:53.264 "is_configured": true, 00:17:53.264 "data_offset": 256, 00:17:53.264 "data_size": 7936 00:17:53.264 } 00:17:53.264 ] 00:17:53.264 }' 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.264 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.265 [2024-11-25 15:44:51.682308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.265 [2024-11-25 15:44:51.719183] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:53.265 [2024-11-25 15:44:51.719234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.265 [2024-11-25 15:44:51.719249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.265 [2024-11-25 15:44:51.719256] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.265 15:44:51 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.265 "name": "raid_bdev1", 00:17:53.265 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:53.265 "strip_size_kb": 0, 00:17:53.265 "state": "online", 00:17:53.265 "raid_level": "raid1", 00:17:53.265 "superblock": true, 00:17:53.265 "num_base_bdevs": 2, 00:17:53.265 "num_base_bdevs_discovered": 1, 00:17:53.265 "num_base_bdevs_operational": 1, 00:17:53.265 "base_bdevs_list": [ 00:17:53.265 { 00:17:53.265 "name": null, 00:17:53.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.265 "is_configured": false, 00:17:53.265 "data_offset": 0, 00:17:53.265 "data_size": 7936 00:17:53.265 }, 00:17:53.265 { 00:17:53.265 "name": "BaseBdev2", 00:17:53.265 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:53.265 "is_configured": true, 00:17:53.265 "data_offset": 256, 00:17:53.265 "data_size": 7936 00:17:53.265 } 00:17:53.265 ] 00:17:53.265 }' 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.265 15:44:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.525 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:53.525 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.525 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:53.525 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:53.525 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.525 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.525 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.525 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.525 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.785 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.785 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.785 "name": "raid_bdev1", 00:17:53.785 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:53.785 "strip_size_kb": 0, 00:17:53.785 "state": "online", 00:17:53.785 "raid_level": "raid1", 00:17:53.785 "superblock": true, 00:17:53.785 "num_base_bdevs": 2, 00:17:53.785 "num_base_bdevs_discovered": 1, 00:17:53.785 "num_base_bdevs_operational": 1, 00:17:53.785 "base_bdevs_list": [ 00:17:53.785 { 00:17:53.785 "name": null, 00:17:53.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.785 "is_configured": false, 00:17:53.785 "data_offset": 0, 00:17:53.785 "data_size": 7936 00:17:53.785 }, 00:17:53.785 { 00:17:53.785 "name": "BaseBdev2", 00:17:53.785 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:53.785 "is_configured": true, 00:17:53.785 "data_offset": 256, 00:17:53.785 "data_size": 7936 00:17:53.785 } 00:17:53.785 ] 00:17:53.785 }' 00:17:53.785 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.785 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:53.785 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.785 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:53.785 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:53.785 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.785 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.785 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.785 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:53.785 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.785 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.785 [2024-11-25 15:44:52.335266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:53.785 [2024-11-25 15:44:52.335317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.785 [2024-11-25 15:44:52.335338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:53.785 [2024-11-25 15:44:52.335354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.785 [2024-11-25 15:44:52.335774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.785 [2024-11-25 15:44:52.335792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:53.785 [2024-11-25 15:44:52.335863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:53.785 [2024-11-25 15:44:52.335876] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:53.785 [2024-11-25 15:44:52.335885] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:53.785 [2024-11-25 15:44:52.335895] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:53.785 BaseBdev1 00:17:53.785 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.785 15:44:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.726 "name": "raid_bdev1", 00:17:54.726 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:54.726 "strip_size_kb": 0, 00:17:54.726 "state": "online", 00:17:54.726 "raid_level": "raid1", 00:17:54.726 "superblock": true, 00:17:54.726 "num_base_bdevs": 2, 00:17:54.726 "num_base_bdevs_discovered": 1, 00:17:54.726 "num_base_bdevs_operational": 1, 00:17:54.726 "base_bdevs_list": [ 00:17:54.726 { 00:17:54.726 "name": null, 00:17:54.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.726 "is_configured": false, 00:17:54.726 "data_offset": 0, 00:17:54.726 "data_size": 7936 00:17:54.726 }, 00:17:54.726 { 00:17:54.726 "name": "BaseBdev2", 00:17:54.726 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:54.726 "is_configured": true, 00:17:54.726 "data_offset": 256, 00:17:54.726 "data_size": 7936 00:17:54.726 } 00:17:54.726 ] 00:17:54.726 }' 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.726 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.296 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:55.296 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.296 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:55.296 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:55.296 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.297 "name": "raid_bdev1", 00:17:55.297 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:55.297 "strip_size_kb": 0, 00:17:55.297 "state": "online", 00:17:55.297 "raid_level": "raid1", 00:17:55.297 "superblock": true, 00:17:55.297 "num_base_bdevs": 2, 00:17:55.297 "num_base_bdevs_discovered": 1, 00:17:55.297 "num_base_bdevs_operational": 1, 00:17:55.297 "base_bdevs_list": [ 00:17:55.297 { 00:17:55.297 "name": null, 00:17:55.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.297 "is_configured": false, 00:17:55.297 "data_offset": 0, 00:17:55.297 "data_size": 7936 00:17:55.297 }, 00:17:55.297 { 00:17:55.297 "name": "BaseBdev2", 00:17:55.297 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:55.297 "is_configured": true, 00:17:55.297 "data_offset": 256, 00:17:55.297 "data_size": 7936 00:17:55.297 } 00:17:55.297 ] 00:17:55.297 }' 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.297 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.297 [2024-11-25 15:44:53.968545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.297 [2024-11-25 15:44:53.968676] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:55.297 [2024-11-25 15:44:53.968689] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:55.557 request: 00:17:55.557 { 00:17:55.557 "base_bdev": "BaseBdev1", 00:17:55.557 "raid_bdev": "raid_bdev1", 00:17:55.557 "method": "bdev_raid_add_base_bdev", 00:17:55.557 "req_id": 1 00:17:55.557 } 00:17:55.557 Got JSON-RPC error response 00:17:55.557 response: 00:17:55.557 { 00:17:55.557 "code": -22, 00:17:55.557 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:55.557 } 00:17:55.557 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:17:55.557 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:55.557 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.557 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.557 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.557 15:44:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:56.498 15:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.498 15:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.498 15:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.498 15:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.498 15:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.498 15:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.498 15:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.498 15:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.498 15:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.498 15:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.498 15:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.498 15:44:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.498 15:44:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:56.498 15:44:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.498 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.498 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.498 "name": "raid_bdev1", 00:17:56.498 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:56.498 "strip_size_kb": 0, 00:17:56.498 "state": "online", 00:17:56.498 "raid_level": "raid1", 00:17:56.498 "superblock": true, 00:17:56.498 "num_base_bdevs": 2, 00:17:56.498 "num_base_bdevs_discovered": 1, 00:17:56.498 "num_base_bdevs_operational": 1, 00:17:56.498 "base_bdevs_list": [ 00:17:56.498 { 00:17:56.498 "name": null, 00:17:56.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.498 "is_configured": false, 00:17:56.498 "data_offset": 0, 00:17:56.498 "data_size": 7936 00:17:56.498 }, 00:17:56.498 { 00:17:56.498 "name": "BaseBdev2", 00:17:56.498 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:56.498 "is_configured": true, 00:17:56.498 "data_offset": 256, 00:17:56.498 "data_size": 7936 00:17:56.498 } 00:17:56.498 ] 00:17:56.498 }' 00:17:56.498 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.498 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.758 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.759 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.759 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.759 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.759 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.759 15:44:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.759 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.759 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.759 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.759 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.019 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.019 "name": "raid_bdev1", 00:17:57.019 "uuid": "7b502cc8-296a-4092-a5aa-7e9527c539db", 00:17:57.019 "strip_size_kb": 0, 00:17:57.019 "state": "online", 00:17:57.019 "raid_level": "raid1", 00:17:57.019 "superblock": true, 00:17:57.019 "num_base_bdevs": 2, 00:17:57.019 "num_base_bdevs_discovered": 1, 00:17:57.019 "num_base_bdevs_operational": 1, 00:17:57.019 "base_bdevs_list": [ 00:17:57.019 { 00:17:57.019 "name": null, 00:17:57.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.019 "is_configured": false, 00:17:57.019 "data_offset": 0, 00:17:57.019 "data_size": 7936 00:17:57.019 }, 00:17:57.019 { 00:17:57.019 "name": "BaseBdev2", 00:17:57.019 "uuid": "ddd941e6-108c-5351-84c9-e726e55b9083", 00:17:57.019 "is_configured": true, 00:17:57.019 "data_offset": 256, 00:17:57.019 "data_size": 7936 00:17:57.019 } 00:17:57.019 ] 00:17:57.019 }' 00:17:57.019 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.019 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:57.019 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.019 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:57.019 15:44:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86108 00:17:57.019 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86108 ']' 00:17:57.019 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86108 00:17:57.019 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:57.019 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.019 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86108 00:17:57.019 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.019 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.019 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86108' 00:17:57.019 killing process with pid 86108 00:17:57.019 Received shutdown signal, test time was about 60.000000 seconds 00:17:57.019 00:17:57.019 Latency(us) 00:17:57.019 [2024-11-25T15:44:55.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.019 [2024-11-25T15:44:55.700Z] =================================================================================================================== 00:17:57.019 [2024-11-25T15:44:55.700Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.019 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86108 00:17:57.019 [2024-11-25 15:44:55.589977] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:57.019 [2024-11-25 15:44:55.590085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.019 [2024-11-25 15:44:55.590129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:17:57.019 [2024-11-25 15:44:55.590141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:57.019 15:44:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86108 00:17:57.279 [2024-11-25 15:44:55.869499] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.219 15:44:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:58.219 00:17:58.219 real 0m19.539s 00:17:58.219 user 0m25.451s 00:17:58.219 sys 0m2.679s 00:17:58.219 15:44:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.219 ************************************ 00:17:58.219 END TEST raid_rebuild_test_sb_4k 00:17:58.219 ************************************ 00:17:58.219 15:44:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.479 15:44:56 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:58.479 15:44:56 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:58.479 15:44:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:58.479 15:44:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.479 15:44:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.479 ************************************ 00:17:58.479 START TEST raid_state_function_test_sb_md_separate 00:17:58.479 ************************************ 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:58.479 15:44:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:58.479 15:44:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=86794 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86794' 00:17:58.479 Process raid pid: 86794 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 86794 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 86794 ']' 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.479 15:44:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.479 [2024-11-25 15:44:57.064694] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:17:58.479 [2024-11-25 15:44:57.064807] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.739 [2024-11-25 15:44:57.237645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.739 [2024-11-25 15:44:57.340688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.998 [2024-11-25 15:44:57.537449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.998 [2024-11-25 15:44:57.537484] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.258 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.258 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.259 [2024-11-25 15:44:57.900888] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:59.259 [2024-11-25 15:44:57.901029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:59.259 [2024-11-25 15:44:57.901045] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:59.259 [2024-11-25 15:44:57.901054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.259 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.518 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.518 "name": "Existed_Raid", 00:17:59.518 "uuid": "82af7990-7df6-48c8-b524-856db965d45b", 00:17:59.518 "strip_size_kb": 0, 00:17:59.518 "state": "configuring", 00:17:59.518 "raid_level": "raid1", 00:17:59.518 "superblock": true, 00:17:59.518 "num_base_bdevs": 2, 00:17:59.518 "num_base_bdevs_discovered": 0, 00:17:59.518 "num_base_bdevs_operational": 2, 00:17:59.518 "base_bdevs_list": [ 00:17:59.518 { 00:17:59.518 "name": "BaseBdev1", 00:17:59.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.518 "is_configured": false, 00:17:59.518 "data_offset": 0, 00:17:59.518 "data_size": 0 00:17:59.518 }, 00:17:59.518 { 00:17:59.518 "name": "BaseBdev2", 00:17:59.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.518 "is_configured": false, 00:17:59.518 "data_offset": 0, 00:17:59.518 "data_size": 0 00:17:59.518 } 00:17:59.518 ] 00:17:59.518 }' 00:17:59.518 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.518 15:44:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.778 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:59.778 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.778 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.778 
[2024-11-25 15:44:58.324099] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:59.778 [2024-11-25 15:44:58.324179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:59.778 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.778 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.779 [2024-11-25 15:44:58.336082] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:59.779 [2024-11-25 15:44:58.336159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:59.779 [2024-11-25 15:44:58.336184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:59.779 [2024-11-25 15:44:58.336208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.779 [2024-11-25 15:44:58.378405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:59.779 
BaseBdev1 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.779 [ 00:17:59.779 { 00:17:59.779 "name": "BaseBdev1", 00:17:59.779 "aliases": [ 00:17:59.779 "4a4b1daf-6d5c-4d94-98c3-0bb86388a5c9" 00:17:59.779 ], 00:17:59.779 "product_name": "Malloc disk", 
00:17:59.779 "block_size": 4096, 00:17:59.779 "num_blocks": 8192, 00:17:59.779 "uuid": "4a4b1daf-6d5c-4d94-98c3-0bb86388a5c9", 00:17:59.779 "md_size": 32, 00:17:59.779 "md_interleave": false, 00:17:59.779 "dif_type": 0, 00:17:59.779 "assigned_rate_limits": { 00:17:59.779 "rw_ios_per_sec": 0, 00:17:59.779 "rw_mbytes_per_sec": 0, 00:17:59.779 "r_mbytes_per_sec": 0, 00:17:59.779 "w_mbytes_per_sec": 0 00:17:59.779 }, 00:17:59.779 "claimed": true, 00:17:59.779 "claim_type": "exclusive_write", 00:17:59.779 "zoned": false, 00:17:59.779 "supported_io_types": { 00:17:59.779 "read": true, 00:17:59.779 "write": true, 00:17:59.779 "unmap": true, 00:17:59.779 "flush": true, 00:17:59.779 "reset": true, 00:17:59.779 "nvme_admin": false, 00:17:59.779 "nvme_io": false, 00:17:59.779 "nvme_io_md": false, 00:17:59.779 "write_zeroes": true, 00:17:59.779 "zcopy": true, 00:17:59.779 "get_zone_info": false, 00:17:59.779 "zone_management": false, 00:17:59.779 "zone_append": false, 00:17:59.779 "compare": false, 00:17:59.779 "compare_and_write": false, 00:17:59.779 "abort": true, 00:17:59.779 "seek_hole": false, 00:17:59.779 "seek_data": false, 00:17:59.779 "copy": true, 00:17:59.779 "nvme_iov_md": false 00:17:59.779 }, 00:17:59.779 "memory_domains": [ 00:17:59.779 { 00:17:59.779 "dma_device_id": "system", 00:17:59.779 "dma_device_type": 1 00:17:59.779 }, 00:17:59.779 { 00:17:59.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.779 "dma_device_type": 2 00:17:59.779 } 00:17:59.779 ], 00:17:59.779 "driver_specific": {} 00:17:59.779 } 00:17:59.779 ] 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:59.779 15:44:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.779 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.039 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.039 "name": "Existed_Raid", 00:18:00.039 "uuid": "074dd514-b254-4c2b-b953-4267dda694df", 
00:18:00.039 "strip_size_kb": 0, 00:18:00.039 "state": "configuring", 00:18:00.039 "raid_level": "raid1", 00:18:00.039 "superblock": true, 00:18:00.039 "num_base_bdevs": 2, 00:18:00.039 "num_base_bdevs_discovered": 1, 00:18:00.039 "num_base_bdevs_operational": 2, 00:18:00.039 "base_bdevs_list": [ 00:18:00.039 { 00:18:00.039 "name": "BaseBdev1", 00:18:00.039 "uuid": "4a4b1daf-6d5c-4d94-98c3-0bb86388a5c9", 00:18:00.039 "is_configured": true, 00:18:00.039 "data_offset": 256, 00:18:00.039 "data_size": 7936 00:18:00.039 }, 00:18:00.039 { 00:18:00.039 "name": "BaseBdev2", 00:18:00.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.039 "is_configured": false, 00:18:00.039 "data_offset": 0, 00:18:00.039 "data_size": 0 00:18:00.039 } 00:18:00.039 ] 00:18:00.039 }' 00:18:00.039 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.039 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.299 [2024-11-25 15:44:58.841670] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:00.299 [2024-11-25 15:44:58.841706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:00.299 15:44:58 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.299 [2024-11-25 15:44:58.853691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:00.299 [2024-11-25 15:44:58.855348] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:00.299 [2024-11-25 15:44:58.855391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.299 "name": "Existed_Raid", 00:18:00.299 "uuid": "d50978ee-8250-4345-aad4-4962054b778a", 00:18:00.299 "strip_size_kb": 0, 00:18:00.299 "state": "configuring", 00:18:00.299 "raid_level": "raid1", 00:18:00.299 "superblock": true, 00:18:00.299 "num_base_bdevs": 2, 00:18:00.299 "num_base_bdevs_discovered": 1, 00:18:00.299 "num_base_bdevs_operational": 2, 00:18:00.299 "base_bdevs_list": [ 00:18:00.299 { 00:18:00.299 "name": "BaseBdev1", 00:18:00.299 "uuid": "4a4b1daf-6d5c-4d94-98c3-0bb86388a5c9", 00:18:00.299 "is_configured": true, 00:18:00.299 "data_offset": 256, 00:18:00.299 "data_size": 7936 00:18:00.299 }, 00:18:00.299 { 00:18:00.299 "name": "BaseBdev2", 00:18:00.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.299 "is_configured": false, 00:18:00.299 "data_offset": 0, 00:18:00.299 "data_size": 0 00:18:00.299 } 00:18:00.299 ] 00:18:00.299 }' 00:18:00.299 15:44:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.299 15:44:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.868 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:00.868 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.868 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.868 [2024-11-25 15:44:59.290911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:00.868 [2024-11-25 15:44:59.291271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:00.868 [2024-11-25 15:44:59.291327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:00.869 [2024-11-25 15:44:59.291437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:00.869 [2024-11-25 15:44:59.291612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:00.869 [2024-11-25 15:44:59.291656] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:00.869 BaseBdev2 00:18:00.869 [2024-11-25 15:44:59.291806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.869 [ 00:18:00.869 { 00:18:00.869 "name": "BaseBdev2", 00:18:00.869 "aliases": [ 00:18:00.869 "f31de14f-805c-4781-aa41-516cd195ea6f" 00:18:00.869 ], 00:18:00.869 "product_name": "Malloc disk", 00:18:00.869 "block_size": 4096, 00:18:00.869 "num_blocks": 8192, 00:18:00.869 "uuid": "f31de14f-805c-4781-aa41-516cd195ea6f", 00:18:00.869 "md_size": 32, 00:18:00.869 "md_interleave": false, 00:18:00.869 "dif_type": 0, 00:18:00.869 "assigned_rate_limits": { 00:18:00.869 "rw_ios_per_sec": 0, 00:18:00.869 "rw_mbytes_per_sec": 0, 00:18:00.869 "r_mbytes_per_sec": 0, 00:18:00.869 "w_mbytes_per_sec": 0 00:18:00.869 }, 00:18:00.869 "claimed": true, 00:18:00.869 "claim_type": 
"exclusive_write", 00:18:00.869 "zoned": false, 00:18:00.869 "supported_io_types": { 00:18:00.869 "read": true, 00:18:00.869 "write": true, 00:18:00.869 "unmap": true, 00:18:00.869 "flush": true, 00:18:00.869 "reset": true, 00:18:00.869 "nvme_admin": false, 00:18:00.869 "nvme_io": false, 00:18:00.869 "nvme_io_md": false, 00:18:00.869 "write_zeroes": true, 00:18:00.869 "zcopy": true, 00:18:00.869 "get_zone_info": false, 00:18:00.869 "zone_management": false, 00:18:00.869 "zone_append": false, 00:18:00.869 "compare": false, 00:18:00.869 "compare_and_write": false, 00:18:00.869 "abort": true, 00:18:00.869 "seek_hole": false, 00:18:00.869 "seek_data": false, 00:18:00.869 "copy": true, 00:18:00.869 "nvme_iov_md": false 00:18:00.869 }, 00:18:00.869 "memory_domains": [ 00:18:00.869 { 00:18:00.869 "dma_device_id": "system", 00:18:00.869 "dma_device_type": 1 00:18:00.869 }, 00:18:00.869 { 00:18:00.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.869 "dma_device_type": 2 00:18:00.869 } 00:18:00.869 ], 00:18:00.869 "driver_specific": {} 00:18:00.869 } 00:18:00.869 ] 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.869 
15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.869 "name": "Existed_Raid", 00:18:00.869 "uuid": "d50978ee-8250-4345-aad4-4962054b778a", 00:18:00.869 "strip_size_kb": 0, 00:18:00.869 "state": "online", 00:18:00.869 "raid_level": "raid1", 00:18:00.869 "superblock": true, 00:18:00.869 "num_base_bdevs": 2, 00:18:00.869 "num_base_bdevs_discovered": 2, 00:18:00.869 "num_base_bdevs_operational": 2, 00:18:00.869 
"base_bdevs_list": [ 00:18:00.869 { 00:18:00.869 "name": "BaseBdev1", 00:18:00.869 "uuid": "4a4b1daf-6d5c-4d94-98c3-0bb86388a5c9", 00:18:00.869 "is_configured": true, 00:18:00.869 "data_offset": 256, 00:18:00.869 "data_size": 7936 00:18:00.869 }, 00:18:00.869 { 00:18:00.869 "name": "BaseBdev2", 00:18:00.869 "uuid": "f31de14f-805c-4781-aa41-516cd195ea6f", 00:18:00.869 "is_configured": true, 00:18:00.869 "data_offset": 256, 00:18:00.869 "data_size": 7936 00:18:00.869 } 00:18:00.869 ] 00:18:00.869 }' 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.869 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.129 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:01.129 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:01.129 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:01.129 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:01.129 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:01.129 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:01.129 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:01.129 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.129 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.129 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:18:01.129 [2024-11-25 15:44:59.782338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.129 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.389 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:01.389 "name": "Existed_Raid", 00:18:01.389 "aliases": [ 00:18:01.389 "d50978ee-8250-4345-aad4-4962054b778a" 00:18:01.389 ], 00:18:01.389 "product_name": "Raid Volume", 00:18:01.389 "block_size": 4096, 00:18:01.389 "num_blocks": 7936, 00:18:01.389 "uuid": "d50978ee-8250-4345-aad4-4962054b778a", 00:18:01.389 "md_size": 32, 00:18:01.389 "md_interleave": false, 00:18:01.389 "dif_type": 0, 00:18:01.389 "assigned_rate_limits": { 00:18:01.389 "rw_ios_per_sec": 0, 00:18:01.389 "rw_mbytes_per_sec": 0, 00:18:01.389 "r_mbytes_per_sec": 0, 00:18:01.389 "w_mbytes_per_sec": 0 00:18:01.389 }, 00:18:01.389 "claimed": false, 00:18:01.389 "zoned": false, 00:18:01.389 "supported_io_types": { 00:18:01.389 "read": true, 00:18:01.389 "write": true, 00:18:01.389 "unmap": false, 00:18:01.389 "flush": false, 00:18:01.389 "reset": true, 00:18:01.389 "nvme_admin": false, 00:18:01.389 "nvme_io": false, 00:18:01.389 "nvme_io_md": false, 00:18:01.389 "write_zeroes": true, 00:18:01.389 "zcopy": false, 00:18:01.389 "get_zone_info": false, 00:18:01.389 "zone_management": false, 00:18:01.389 "zone_append": false, 00:18:01.389 "compare": false, 00:18:01.389 "compare_and_write": false, 00:18:01.390 "abort": false, 00:18:01.390 "seek_hole": false, 00:18:01.390 "seek_data": false, 00:18:01.390 "copy": false, 00:18:01.390 "nvme_iov_md": false 00:18:01.390 }, 00:18:01.390 "memory_domains": [ 00:18:01.390 { 00:18:01.390 "dma_device_id": "system", 00:18:01.390 "dma_device_type": 1 00:18:01.390 }, 00:18:01.390 { 00:18:01.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.390 "dma_device_type": 2 00:18:01.390 }, 00:18:01.390 { 
00:18:01.390 "dma_device_id": "system", 00:18:01.390 "dma_device_type": 1 00:18:01.390 }, 00:18:01.390 { 00:18:01.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.390 "dma_device_type": 2 00:18:01.390 } 00:18:01.390 ], 00:18:01.390 "driver_specific": { 00:18:01.390 "raid": { 00:18:01.390 "uuid": "d50978ee-8250-4345-aad4-4962054b778a", 00:18:01.390 "strip_size_kb": 0, 00:18:01.390 "state": "online", 00:18:01.390 "raid_level": "raid1", 00:18:01.390 "superblock": true, 00:18:01.390 "num_base_bdevs": 2, 00:18:01.390 "num_base_bdevs_discovered": 2, 00:18:01.390 "num_base_bdevs_operational": 2, 00:18:01.390 "base_bdevs_list": [ 00:18:01.390 { 00:18:01.390 "name": "BaseBdev1", 00:18:01.390 "uuid": "4a4b1daf-6d5c-4d94-98c3-0bb86388a5c9", 00:18:01.390 "is_configured": true, 00:18:01.390 "data_offset": 256, 00:18:01.390 "data_size": 7936 00:18:01.390 }, 00:18:01.390 { 00:18:01.390 "name": "BaseBdev2", 00:18:01.390 "uuid": "f31de14f-805c-4781-aa41-516cd195ea6f", 00:18:01.390 "is_configured": true, 00:18:01.390 "data_offset": 256, 00:18:01.390 "data_size": 7936 00:18:01.390 } 00:18:01.390 ] 00:18:01.390 } 00:18:01.390 } 00:18:01.390 }' 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:01.390 BaseBdev2' 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.390 15:44:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.390 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:01.390 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:01.390 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:01.390 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.390 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.390 [2024-11-25 15:45:00.013749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.651 "name": "Existed_Raid", 00:18:01.651 "uuid": "d50978ee-8250-4345-aad4-4962054b778a", 00:18:01.651 "strip_size_kb": 0, 00:18:01.651 "state": "online", 00:18:01.651 "raid_level": "raid1", 00:18:01.651 "superblock": true, 00:18:01.651 "num_base_bdevs": 2, 00:18:01.651 "num_base_bdevs_discovered": 1, 00:18:01.651 "num_base_bdevs_operational": 1, 00:18:01.651 "base_bdevs_list": [ 00:18:01.651 { 00:18:01.651 "name": null, 00:18:01.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.651 "is_configured": false, 00:18:01.651 "data_offset": 0, 00:18:01.651 "data_size": 7936 00:18:01.651 }, 00:18:01.651 { 00:18:01.651 "name": "BaseBdev2", 00:18:01.651 "uuid": 
"f31de14f-805c-4781-aa41-516cd195ea6f", 00:18:01.651 "is_configured": true, 00:18:01.651 "data_offset": 256, 00:18:01.651 "data_size": 7936 00:18:01.651 } 00:18:01.651 ] 00:18:01.651 }' 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.651 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.912 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:01.912 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:01.912 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:01.912 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.912 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.912 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.173 [2024-11-25 15:45:00.628630] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:02.173 [2024-11-25 15:45:00.628781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.173 [2024-11-25 15:45:00.723688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.173 [2024-11-25 15:45:00.723816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.173 [2024-11-25 15:45:00.723857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:02.173 15:45:00 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 86794 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 86794 ']' 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 86794 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86794 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:02.173 killing process with pid 86794 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86794' 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 86794 00:18:02.173 [2024-11-25 15:45:00.808382] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:02.173 15:45:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 86794 00:18:02.173 [2024-11-25 15:45:00.823298] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:03.555 15:45:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:03.555 00:18:03.555 real 0m4.885s 00:18:03.555 user 0m7.064s 00:18:03.555 sys 0m0.856s 00:18:03.555 15:45:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.555 
15:45:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.555 ************************************ 00:18:03.555 END TEST raid_state_function_test_sb_md_separate 00:18:03.555 ************************************ 00:18:03.555 15:45:01 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:03.555 15:45:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:03.555 15:45:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.555 15:45:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:03.555 ************************************ 00:18:03.555 START TEST raid_superblock_test_md_separate 00:18:03.555 ************************************ 00:18:03.555 15:45:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:03.555 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:03.555 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:03.555 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:03.555 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size
00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87041
00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87041
00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87041 ']'
00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:03.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
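`waitforlisten` above blocks until the freshly started `bdev_svc` app (pid 87041) is accepting RPCs on `/var/tmp/spdk.sock`, retrying up to `max_retries` times. A hedged sketch of that wait-for-socket pattern; the function name and the socket-probe test here are illustrative stand-ins, not the helper's actual implementation:

```shell
#!/usr/bin/env bash
# Retry loop in the spirit of waitforlisten: poll for the daemon's
# UNIX-domain socket, giving up after max_retries attempts.
wait_for_socket() {
    local rpc_addr=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        # -S: true once the path exists and is a socket
        [ -S "$rpc_addr" ] && return 0
        sleep 0.1
    done
    return 1
}
```

Usage mirrors the trace: `wait_for_socket /var/tmp/spdk.sock 100 || exit 1`. The real helper additionally verifies the target pid is still alive while it waits.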
00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:03.556 15:45:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:03.556 [2024-11-25 15:45:02.018793] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization...
00:18:03.556 [2024-11-25 15:45:02.018979] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87041 ]
00:18:03.556 [2024-11-25 15:45:02.194484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:03.816 [2024-11-25 15:45:02.302114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:03.816 [2024-11-25 15:45:02.483242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:03.816 [2024-11-25 15:45:02.483368] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:04.387 malloc1
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:04.387 [2024-11-25 15:45:02.879458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:04.387 [2024-11-25 15:45:02.879520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:04.387 [2024-11-25 15:45:02.879549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:18:04.387 [2024-11-25 15:45:02.879558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:04.387 [2024-11-25 15:45:02.881381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:04.387 [2024-11-25 15:45:02.881418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:04.387 pt1
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:04.387 malloc2
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:04.387 [2024-11-25 15:45:02.928771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:04.387 [2024-11-25 15:45:02.928913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:04.387 [2024-11-25 15:45:02.928950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:18:04.387 [2024-11-25 15:45:02.928977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:04.387 [2024-11-25 15:45:02.930774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:04.387 [2024-11-25 15:45:02.930843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:04.387 pt2
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:04.387 [2024-11-25 15:45:02.940793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:04.387 [2024-11-25 15:45:02.942571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:04.387 [2024-11-25 15:45:02.942808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:18:04.387 [2024-11-25 15:45:02.942854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:18:04.387 [2024-11-25 15:45:02.942946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:18:04.387 [2024-11-25 15:45:02.943140] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:18:04.387 [2024-11-25 15:45:02.943186] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:18:04.387 [2024-11-25 15:45:02.943321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:04.387 15:45:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.387 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:04.387 "name": "raid_bdev1",
00:18:04.387 "uuid": "a49305e3-19af-43cb-88c6-529cd2ac0266",
00:18:04.387 "strip_size_kb": 0,
00:18:04.387 "state": "online",
00:18:04.387 "raid_level": "raid1",
00:18:04.387 "superblock": true,
00:18:04.387 "num_base_bdevs": 2,
00:18:04.387 "num_base_bdevs_discovered": 2,
00:18:04.387 "num_base_bdevs_operational": 2,
00:18:04.387 "base_bdevs_list": [
00:18:04.387 {
00:18:04.387 "name": "pt1",
00:18:04.387 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:04.387 "is_configured": true,
00:18:04.387 "data_offset": 256,
00:18:04.387 "data_size": 7936
00:18:04.387 },
00:18:04.387 {
00:18:04.387 "name": "pt2",
00:18:04.387 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:04.387 "is_configured": true,
00:18:04.387 "data_offset": 256,
00:18:04.387 "data_size": 7936
00:18:04.387 }
00:18:04.387 ]
00:18:04.387 }'
00:18:04.387 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:04.387 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:04.956 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:18:04.956 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:18:04.956 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:18:04.956 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:18:04.956 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:18:04.956 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:18:04.956 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:18:04.957 [2024-11-25 15:45:03.420222] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:18:04.957 "name": "raid_bdev1",
00:18:04.957 "aliases": [
00:18:04.957 "a49305e3-19af-43cb-88c6-529cd2ac0266"
00:18:04.957 ],
00:18:04.957 "product_name": "Raid Volume",
00:18:04.957 "block_size": 4096,
00:18:04.957 "num_blocks": 7936,
00:18:04.957 "uuid": "a49305e3-19af-43cb-88c6-529cd2ac0266",
00:18:04.957 "md_size": 32,
00:18:04.957 "md_interleave": false,
00:18:04.957 "dif_type": 0,
00:18:04.957 "assigned_rate_limits": {
00:18:04.957 "rw_ios_per_sec": 0,
00:18:04.957 "rw_mbytes_per_sec": 0,
00:18:04.957 "r_mbytes_per_sec": 0,
00:18:04.957 "w_mbytes_per_sec": 0
00:18:04.957 },
00:18:04.957 "claimed": false,
00:18:04.957 "zoned": false,
00:18:04.957 "supported_io_types": {
00:18:04.957 "read": true,
00:18:04.957 "write": true,
00:18:04.957 "unmap": false,
00:18:04.957 "flush": false,
00:18:04.957 "reset": true,
00:18:04.957 "nvme_admin": false,
00:18:04.957 "nvme_io": false,
00:18:04.957 "nvme_io_md": false,
00:18:04.957 "write_zeroes": true,
00:18:04.957 "zcopy": false,
00:18:04.957 "get_zone_info": false,
00:18:04.957 "zone_management": false,
00:18:04.957 "zone_append": false,
00:18:04.957 "compare": false,
00:18:04.957 "compare_and_write": false,
00:18:04.957 "abort": false,
00:18:04.957 "seek_hole": false,
00:18:04.957 "seek_data": false,
00:18:04.957 "copy": false,
00:18:04.957 "nvme_iov_md": false
00:18:04.957 },
00:18:04.957 "memory_domains": [
00:18:04.957 {
00:18:04.957 "dma_device_id": "system",
00:18:04.957 "dma_device_type": 1
00:18:04.957 },
00:18:04.957 {
00:18:04.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:04.957 "dma_device_type": 2
00:18:04.957 },
00:18:04.957 {
00:18:04.957 "dma_device_id": "system",
00:18:04.957 "dma_device_type": 1
00:18:04.957 },
00:18:04.957 {
00:18:04.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:04.957 "dma_device_type": 2
00:18:04.957 }
00:18:04.957 ],
00:18:04.957 "driver_specific": {
00:18:04.957 "raid": {
00:18:04.957 "uuid": "a49305e3-19af-43cb-88c6-529cd2ac0266",
00:18:04.957 "strip_size_kb": 0,
00:18:04.957 "state": "online",
00:18:04.957 "raid_level": "raid1",
00:18:04.957 "superblock": true,
00:18:04.957 "num_base_bdevs": 2,
00:18:04.957 "num_base_bdevs_discovered": 2,
00:18:04.957 "num_base_bdevs_operational": 2,
00:18:04.957 "base_bdevs_list": [
00:18:04.957 {
00:18:04.957 "name": "pt1",
00:18:04.957 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:04.957 "is_configured": true,
00:18:04.957 "data_offset": 256,
00:18:04.957 "data_size": 7936
00:18:04.957 },
00:18:04.957 {
00:18:04.957 "name": "pt2",
00:18:04.957 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:04.957 "is_configured": true,
00:18:04.957 "data_offset": 256,
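The `verify_raid_bdev_properties` check running here extracts the `block_size md_size md_interleave dif_type` tuple from the raid volume and from each base bdev with `jq`, then compares the two strings in bash. A minimal sketch of that comparison step, using the tuple values visible in the trace (`4096 32 false 0`); the helper function name is illustrative:

```shell
#!/usr/bin/env bash
# Compare two space-joined property tuples exactly, as the test does
# with its cmp_raid_bdev / cmp_base_bdev strings.
props_match() {
    # Unquoted left side is fine here; quote the right side so it is
    # compared literally rather than as a glob pattern.
    [[ $1 == "$2" ]]
}

cmp_raid_bdev='4096 32 false 0'   # tuple reported for the raid volume
cmp_base_bdev='4096 32 false 0'   # tuple reported for a base bdev
props_match "$cmp_base_bdev" "$cmp_raid_bdev" && echo match   # prints: match
```

In the real test the tuples come from `rpc_cmd bdev_get_bdevs -b <name> | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'`, and a mismatch fails the test.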
00:18:04.957 "data_size": 7936
00:18:04.957 }
00:18:04.957 ]
00:18:04.957 }
00:18:04.957 }
00:18:04.957 }'
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:18:04.957 pt2'
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.957 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:04.957 [2024-11-25 15:45:03.627848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a49305e3-19af-43cb-88c6-529cd2ac0266
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z a49305e3-19af-43cb-88c6-529cd2ac0266 ']'
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:05.218 [2024-11-25 15:45:03.675606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:05.218 [2024-11-25 15:45:03.675673] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:05.218 [2024-11-25 15:45:03.675757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:05.218 [2024-11-25 15:45:03.675834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:05.218 [2024-11-25 15:45:03.675869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
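The `NOT rpc_cmd bdev_raid_create …` step that follows in the trace asserts that re-creating the raid bdev from already-claimed malloc bdevs *fails* (the JSON-RPC error is `File exists`). A minimal sketch of that negation-helper idiom, exercised here with plain `false`/`true` rather than a real RPC (the real helper in autotest_common.sh also validates its argument and inspects the exit code, as the `valid_exec_arg`/`es` records below show):

```shell
#!/usr/bin/env bash
# Negation helper: succeed only when the wrapped command fails,
# so "expected failures" can be asserted inline in a test script.
NOT() {
    if "$@"; then
        return 1   # wrapped command unexpectedly succeeded
    fi
    return 0       # wrapped command failed, as expected
}

NOT false && echo "negation ok"   # prints: negation ok
```

Usage mirrors the trace: `NOT rpc_cmd bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1` passes precisely because the duplicate create is rejected.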
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:05.218 [2024-11-25 15:45:03.811375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:18:05.218 [2024-11-25 15:45:03.813122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:18:05.218 [2024-11-25 15:45:03.813188] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:18:05.218 [2024-11-25 15:45:03.813233] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:18:05.218 [2024-11-25 15:45:03.813247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:05.218 [2024-11-25 15:45:03.813255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:18:05.218 request:
00:18:05.218 {
00:18:05.218 "name": "raid_bdev1",
00:18:05.218 "raid_level": "raid1",
00:18:05.218 "base_bdevs": [
00:18:05.218 "malloc1",
00:18:05.218 "malloc2"
00:18:05.218 ],
00:18:05.218 "superblock": false,
00:18:05.218 "method": "bdev_raid_create",
00:18:05.218 "req_id": 1
00:18:05.218 }
00:18:05.218 Got JSON-RPC error response
00:18:05.218 response:
00:18:05.218 {
00:18:05.218 "code": -17,
00:18:05.218 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:18:05.218 }
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:05.218 [2024-11-25 15:45:03.871266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:05.218 [2024-11-25 15:45:03.871357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:05.218 [2024-11-25 15:45:03.871386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:18:05.218 [2024-11-25 15:45:03.871414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:05.218 [2024-11-25 15:45:03.873269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:05.218 [2024-11-25 15:45:03.873355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:05.218 [2024-11-25 15:45:03.873413] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:18:05.218 [2024-11-25 15:45:03.873474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:05.218 pt1
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:05.218 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:05.219 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:05.219 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:05.219 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:05.219 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:05.219 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:05.219 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:05.219 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:05.219 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:05.219 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:05.219 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.219 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:05.478 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:05.478 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:05.478 "name": "raid_bdev1",
00:18:05.478 "uuid": "a49305e3-19af-43cb-88c6-529cd2ac0266",
00:18:05.478 "strip_size_kb": 0,
00:18:05.478 "state": "configuring",
00:18:05.478 "raid_level": "raid1",
00:18:05.478 "superblock": true,
00:18:05.478 "num_base_bdevs": 2,
00:18:05.478 "num_base_bdevs_discovered": 1,
00:18:05.478 "num_base_bdevs_operational": 2,
00:18:05.478 "base_bdevs_list": [
00:18:05.478 {
00:18:05.478 "name": "pt1",
00:18:05.478 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:05.478 "is_configured": true,
00:18:05.478 "data_offset": 256,
00:18:05.478 "data_size": 7936
00:18:05.478 },
00:18:05.478 {
00:18:05.478 "name": null,
00:18:05.478 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:05.478 "is_configured": false,
00:18:05.478 "data_offset": 256,
00:18:05.478 "data_size": 7936
00:18:05.478 }
00:18:05.478 ]
00:18:05.478 }'
00:18:05.478 15:45:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:05.478 15:45:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:05.738 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:18:05.738 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:18:05.738 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:18:05.738 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:05.738 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.738 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:05.738 [2024-11-25 15:45:04.302501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:05.738 [2024-11-25 15:45:04.302556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:05.738 [2024-11-25 15:45:04.302572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:18:05.738 [2024-11-25 15:45:04.302582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:05.738 [2024-11-25 15:45:04.302734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:05.738 [2024-11-25 15:45:04.302750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:05.738 [2024-11-25 15:45:04.302782] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found
on bdev pt2 00:18:05.738 [2024-11-25 15:45:04.302799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:05.738 [2024-11-25 15:45:04.302884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:05.738 [2024-11-25 15:45:04.302894] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:05.738 [2024-11-25 15:45:04.302950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:05.739 [2024-11-25 15:45:04.303072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:05.739 [2024-11-25 15:45:04.303080] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:05.739 [2024-11-25 15:45:04.303155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.739 pt2 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.739 "name": "raid_bdev1", 00:18:05.739 "uuid": "a49305e3-19af-43cb-88c6-529cd2ac0266", 00:18:05.739 "strip_size_kb": 0, 00:18:05.739 "state": "online", 00:18:05.739 "raid_level": "raid1", 00:18:05.739 "superblock": true, 00:18:05.739 "num_base_bdevs": 2, 00:18:05.739 "num_base_bdevs_discovered": 2, 00:18:05.739 "num_base_bdevs_operational": 2, 00:18:05.739 "base_bdevs_list": [ 00:18:05.739 { 00:18:05.739 "name": "pt1", 00:18:05.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:05.739 "is_configured": true, 00:18:05.739 "data_offset": 256, 00:18:05.739 "data_size": 7936 00:18:05.739 }, 00:18:05.739 { 00:18:05.739 "name": "pt2", 00:18:05.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.739 "is_configured": true, 00:18:05.739 "data_offset": 256, 
00:18:05.739 "data_size": 7936 00:18:05.739 } 00:18:05.739 ] 00:18:05.739 }' 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.739 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.309 [2024-11-25 15:45:04.761952] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:06.309 "name": "raid_bdev1", 00:18:06.309 "aliases": [ 00:18:06.309 "a49305e3-19af-43cb-88c6-529cd2ac0266" 00:18:06.309 ], 00:18:06.309 "product_name": 
"Raid Volume", 00:18:06.309 "block_size": 4096, 00:18:06.309 "num_blocks": 7936, 00:18:06.309 "uuid": "a49305e3-19af-43cb-88c6-529cd2ac0266", 00:18:06.309 "md_size": 32, 00:18:06.309 "md_interleave": false, 00:18:06.309 "dif_type": 0, 00:18:06.309 "assigned_rate_limits": { 00:18:06.309 "rw_ios_per_sec": 0, 00:18:06.309 "rw_mbytes_per_sec": 0, 00:18:06.309 "r_mbytes_per_sec": 0, 00:18:06.309 "w_mbytes_per_sec": 0 00:18:06.309 }, 00:18:06.309 "claimed": false, 00:18:06.309 "zoned": false, 00:18:06.309 "supported_io_types": { 00:18:06.309 "read": true, 00:18:06.309 "write": true, 00:18:06.309 "unmap": false, 00:18:06.309 "flush": false, 00:18:06.309 "reset": true, 00:18:06.309 "nvme_admin": false, 00:18:06.309 "nvme_io": false, 00:18:06.309 "nvme_io_md": false, 00:18:06.309 "write_zeroes": true, 00:18:06.309 "zcopy": false, 00:18:06.309 "get_zone_info": false, 00:18:06.309 "zone_management": false, 00:18:06.309 "zone_append": false, 00:18:06.309 "compare": false, 00:18:06.309 "compare_and_write": false, 00:18:06.309 "abort": false, 00:18:06.309 "seek_hole": false, 00:18:06.309 "seek_data": false, 00:18:06.309 "copy": false, 00:18:06.309 "nvme_iov_md": false 00:18:06.309 }, 00:18:06.309 "memory_domains": [ 00:18:06.309 { 00:18:06.309 "dma_device_id": "system", 00:18:06.309 "dma_device_type": 1 00:18:06.309 }, 00:18:06.309 { 00:18:06.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.309 "dma_device_type": 2 00:18:06.309 }, 00:18:06.309 { 00:18:06.309 "dma_device_id": "system", 00:18:06.309 "dma_device_type": 1 00:18:06.309 }, 00:18:06.309 { 00:18:06.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.309 "dma_device_type": 2 00:18:06.309 } 00:18:06.309 ], 00:18:06.309 "driver_specific": { 00:18:06.309 "raid": { 00:18:06.309 "uuid": "a49305e3-19af-43cb-88c6-529cd2ac0266", 00:18:06.309 "strip_size_kb": 0, 00:18:06.309 "state": "online", 00:18:06.309 "raid_level": "raid1", 00:18:06.309 "superblock": true, 00:18:06.309 "num_base_bdevs": 2, 00:18:06.309 
"num_base_bdevs_discovered": 2, 00:18:06.309 "num_base_bdevs_operational": 2, 00:18:06.309 "base_bdevs_list": [ 00:18:06.309 { 00:18:06.309 "name": "pt1", 00:18:06.309 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:06.309 "is_configured": true, 00:18:06.309 "data_offset": 256, 00:18:06.309 "data_size": 7936 00:18:06.309 }, 00:18:06.309 { 00:18:06.309 "name": "pt2", 00:18:06.309 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.309 "is_configured": true, 00:18:06.309 "data_offset": 256, 00:18:06.309 "data_size": 7936 00:18:06.309 } 00:18:06.309 ] 00:18:06.309 } 00:18:06.309 } 00:18:06.309 }' 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:06.309 pt2' 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.309 
15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.309 [2024-11-25 15:45:04.965609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.309 15:45:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' a49305e3-19af-43cb-88c6-529cd2ac0266 '!=' a49305e3-19af-43cb-88c6-529cd2ac0266 ']' 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.570 [2024-11-25 15:45:05.009340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.570 15:45:05 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.570 "name": "raid_bdev1", 00:18:06.570 "uuid": "a49305e3-19af-43cb-88c6-529cd2ac0266", 00:18:06.570 "strip_size_kb": 0, 00:18:06.570 "state": "online", 00:18:06.570 "raid_level": "raid1", 00:18:06.570 "superblock": true, 00:18:06.570 "num_base_bdevs": 2, 00:18:06.570 "num_base_bdevs_discovered": 1, 00:18:06.570 "num_base_bdevs_operational": 1, 00:18:06.570 "base_bdevs_list": [ 00:18:06.570 { 00:18:06.570 "name": null, 00:18:06.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.570 "is_configured": false, 00:18:06.570 "data_offset": 0, 00:18:06.570 "data_size": 7936 00:18:06.570 }, 00:18:06.570 { 00:18:06.570 "name": "pt2", 00:18:06.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.570 "is_configured": true, 00:18:06.570 "data_offset": 256, 00:18:06.570 "data_size": 7936 00:18:06.570 } 00:18:06.570 ] 00:18:06.570 }' 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:06.570 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.830 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:06.830 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.830 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.830 [2024-11-25 15:45:05.492503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.830 [2024-11-25 15:45:05.492572] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:06.830 [2024-11-25 15:45:05.492653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.830 [2024-11-25 15:45:05.492702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.830 [2024-11-25 15:45:05.492734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:06.830 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.830 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.830 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.830 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:06.830 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:07.090 15:45:05 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.090 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.090 [2024-11-25 15:45:05.568386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:07.090 [2024-11-25 15:45:05.568436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.090 
[2024-11-25 15:45:05.568451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:07.090 [2024-11-25 15:45:05.568460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.091 [2024-11-25 15:45:05.570353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.091 [2024-11-25 15:45:05.570442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:07.091 [2024-11-25 15:45:05.570485] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:07.091 [2024-11-25 15:45:05.570524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:07.091 [2024-11-25 15:45:05.570610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:07.091 [2024-11-25 15:45:05.570622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:07.091 [2024-11-25 15:45:05.570683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:07.091 [2024-11-25 15:45:05.570782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:07.091 [2024-11-25 15:45:05.570789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:07.091 [2024-11-25 15:45:05.570880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.091 pt2 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.091 "name": "raid_bdev1", 00:18:07.091 "uuid": "a49305e3-19af-43cb-88c6-529cd2ac0266", 00:18:07.091 "strip_size_kb": 0, 00:18:07.091 "state": "online", 00:18:07.091 "raid_level": "raid1", 00:18:07.091 "superblock": true, 00:18:07.091 "num_base_bdevs": 2, 00:18:07.091 "num_base_bdevs_discovered": 1, 00:18:07.091 "num_base_bdevs_operational": 1, 00:18:07.091 "base_bdevs_list": [ 00:18:07.091 { 00:18:07.091 
"name": null, 00:18:07.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.091 "is_configured": false, 00:18:07.091 "data_offset": 256, 00:18:07.091 "data_size": 7936 00:18:07.091 }, 00:18:07.091 { 00:18:07.091 "name": "pt2", 00:18:07.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.091 "is_configured": true, 00:18:07.091 "data_offset": 256, 00:18:07.091 "data_size": 7936 00:18:07.091 } 00:18:07.091 ] 00:18:07.091 }' 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.091 15:45:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.351 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:07.351 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.351 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.351 [2024-11-25 15:45:06.007609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.351 [2024-11-25 15:45:06.007680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.351 [2024-11-25 15:45:06.007741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.351 [2024-11-25 15:45:06.007791] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.351 [2024-11-25 15:45:06.007858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:07.351 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.351 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.351 15:45:06 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.351 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:07.351 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.351 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.611 [2024-11-25 15:45:06.071553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:07.611 [2024-11-25 15:45:06.071653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.611 [2024-11-25 15:45:06.071684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:07.611 [2024-11-25 15:45:06.071709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.611 [2024-11-25 15:45:06.073559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.611 [2024-11-25 15:45:06.073626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:07.611 [2024-11-25 15:45:06.073685] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:18:07.611 [2024-11-25 15:45:06.073755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:07.611 [2024-11-25 15:45:06.073872] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:07.611 [2024-11-25 15:45:06.073920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.611 [2024-11-25 15:45:06.073952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:07.611 [2024-11-25 15:45:06.074077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:07.611 [2024-11-25 15:45:06.074169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:07.611 [2024-11-25 15:45:06.074209] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:07.611 [2024-11-25 15:45:06.074295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:07.611 [2024-11-25 15:45:06.074429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:07.611 [2024-11-25 15:45:06.074467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:07.611 [2024-11-25 15:45:06.074594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.611 pt1 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.611 "name": "raid_bdev1", 00:18:07.611 "uuid": "a49305e3-19af-43cb-88c6-529cd2ac0266", 00:18:07.611 "strip_size_kb": 0, 00:18:07.611 "state": "online", 00:18:07.611 "raid_level": "raid1", 00:18:07.611 "superblock": true, 00:18:07.611 "num_base_bdevs": 2, 00:18:07.611 "num_base_bdevs_discovered": 1, 00:18:07.611 
"num_base_bdevs_operational": 1, 00:18:07.611 "base_bdevs_list": [ 00:18:07.611 { 00:18:07.611 "name": null, 00:18:07.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.611 "is_configured": false, 00:18:07.611 "data_offset": 256, 00:18:07.611 "data_size": 7936 00:18:07.611 }, 00:18:07.611 { 00:18:07.611 "name": "pt2", 00:18:07.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.611 "is_configured": true, 00:18:07.611 "data_offset": 256, 00:18:07.611 "data_size": 7936 00:18:07.611 } 00:18:07.611 ] 00:18:07.611 }' 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.611 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.181 [2024-11-25 
15:45:06.614803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' a49305e3-19af-43cb-88c6-529cd2ac0266 '!=' a49305e3-19af-43cb-88c6-529cd2ac0266 ']' 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87041 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87041 ']' 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87041 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87041 00:18:08.181 killing process with pid 87041 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87041' 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87041 00:18:08.181 [2024-11-25 15:45:06.695428] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:08.181 [2024-11-25 15:45:06.695481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.181 [2024-11-25 15:45:06.695511] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:18:08.181 [2024-11-25 15:45:06.695526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:08.181 15:45:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87041 00:18:08.441 [2024-11-25 15:45:06.905673] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:09.382 ************************************ 00:18:09.382 END TEST raid_superblock_test_md_separate 00:18:09.382 ************************************ 00:18:09.382 15:45:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:09.382 00:18:09.382 real 0m6.004s 00:18:09.382 user 0m9.117s 00:18:09.382 sys 0m1.129s 00:18:09.382 15:45:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:09.382 15:45:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.382 15:45:07 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:09.382 15:45:07 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:09.382 15:45:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:09.382 15:45:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.382 15:45:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:09.382 ************************************ 00:18:09.382 START TEST raid_rebuild_test_sb_md_separate 00:18:09.382 ************************************ 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:09.382 
15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:09.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87369 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87369 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87369 ']' 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:09.382 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.383 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.383 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:09.383 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.383 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.643 [2024-11-25 15:45:08.109907] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:18:09.643 [2024-11-25 15:45:08.110116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:09.643 Zero copy mechanism will not be used. 00:18:09.643 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87369 ] 00:18:09.643 [2024-11-25 15:45:08.282730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.903 [2024-11-25 15:45:08.389062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.903 [2024-11-25 15:45:08.582946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.903 [2024-11-25 15:45:08.583048] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.474 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.474 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:10.474 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:10.474 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:10.474 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.474 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.474 BaseBdev1_malloc 
00:18:10.474 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.474 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:10.474 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.474 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.474 [2024-11-25 15:45:08.968574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:10.474 [2024-11-25 15:45:08.968727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.474 [2024-11-25 15:45:08.968770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:10.474 [2024-11-25 15:45:08.968826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.474 [2024-11-25 15:45:08.970787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.474 [2024-11-25 15:45:08.970885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:10.474 BaseBdev1 00:18:10.474 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.474 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:10.474 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:10.474 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.474 15:45:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.474 BaseBdev2_malloc 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.474 [2024-11-25 15:45:09.022961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:10.474 [2024-11-25 15:45:09.023033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.474 [2024-11-25 15:45:09.023069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:10.474 [2024-11-25 15:45:09.023079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.474 [2024-11-25 15:45:09.024880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.474 [2024-11-25 15:45:09.024973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:10.474 BaseBdev2 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.474 spare_malloc 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.474 spare_delay 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.474 [2024-11-25 15:45:09.123604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:10.474 [2024-11-25 15:45:09.123660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.474 [2024-11-25 15:45:09.123679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:10.474 [2024-11-25 15:45:09.123689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.474 [2024-11-25 15:45:09.125527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.474 [2024-11-25 15:45:09.125569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:10.474 spare 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:10.474 [2024-11-25 15:45:09.135619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:10.474 [2024-11-25 15:45:09.137346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.474 [2024-11-25 15:45:09.137510] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:10.474 [2024-11-25 15:45:09.137526] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:10.474 [2024-11-25 15:45:09.137593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:10.474 [2024-11-25 15:45:09.137713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:10.474 [2024-11-25 15:45:09.137722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:10.474 [2024-11-25 15:45:09.137840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:10.474 15:45:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.474 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.734 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.734 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.734 "name": "raid_bdev1", 00:18:10.734 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:10.734 "strip_size_kb": 0, 00:18:10.734 "state": "online", 00:18:10.734 "raid_level": "raid1", 00:18:10.734 "superblock": true, 00:18:10.734 "num_base_bdevs": 2, 00:18:10.734 "num_base_bdevs_discovered": 2, 00:18:10.734 "num_base_bdevs_operational": 2, 00:18:10.734 "base_bdevs_list": [ 00:18:10.734 { 00:18:10.734 "name": "BaseBdev1", 00:18:10.734 "uuid": "19cde605-c102-563e-b397-be55099675ed", 00:18:10.734 "is_configured": true, 00:18:10.734 "data_offset": 256, 00:18:10.734 "data_size": 7936 00:18:10.734 }, 00:18:10.734 { 00:18:10.734 "name": "BaseBdev2", 00:18:10.734 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4", 00:18:10.734 "is_configured": true, 00:18:10.734 "data_offset": 256, 00:18:10.734 "data_size": 7936 
00:18:10.734 } 00:18:10.734 ] 00:18:10.734 }' 00:18:10.734 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.734 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.993 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:10.993 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:10.993 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.993 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.993 [2024-11-25 15:45:09.638953] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.993 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:11.255 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:11.255 [2024-11-25 15:45:09.914310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:11.255 /dev/nbd0 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:11.528 1+0 records in 00:18:11.528 1+0 records out 00:18:11.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577843 s, 7.1 MB/s 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:11.528 15:45:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:11.528 15:45:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:12.124 7936+0 records in 00:18:12.124 7936+0 records out 00:18:12.124 32505856 bytes (33 MB, 31 MiB) copied, 0.571092 s, 56.9 MB/s 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:12.124 [2024-11-25 15:45:10.766022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:12.124 15:45:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.124 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.124 [2024-11-25 15:45:10.798058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.385 "name": "raid_bdev1", 00:18:12.385 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:12.385 "strip_size_kb": 0, 00:18:12.385 "state": "online", 00:18:12.385 "raid_level": "raid1", 00:18:12.385 "superblock": true, 00:18:12.385 "num_base_bdevs": 2, 00:18:12.385 "num_base_bdevs_discovered": 1, 00:18:12.385 "num_base_bdevs_operational": 1, 00:18:12.385 "base_bdevs_list": [ 00:18:12.385 { 00:18:12.385 "name": null, 00:18:12.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.385 "is_configured": false, 00:18:12.385 "data_offset": 0, 00:18:12.385 "data_size": 7936 00:18:12.385 }, 00:18:12.385 { 00:18:12.385 "name": "BaseBdev2", 00:18:12.385 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4", 00:18:12.385 "is_configured": true, 00:18:12.385 "data_offset": 256, 00:18:12.385 "data_size": 7936 00:18:12.385 } 00:18:12.385 ] 00:18:12.385 }' 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.385 15:45:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:12.645 15:45:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:12.645 15:45:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.645 15:45:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.645 [2024-11-25 15:45:11.249243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:12.645 [2024-11-25 15:45:11.264191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:12.645 15:45:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.645 15:45:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:12.645 [2024-11-25 15:45:11.265975] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.027 "name": "raid_bdev1", 00:18:14.027 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:14.027 "strip_size_kb": 0, 00:18:14.027 "state": "online", 00:18:14.027 "raid_level": "raid1", 00:18:14.027 "superblock": true, 00:18:14.027 "num_base_bdevs": 2, 00:18:14.027 "num_base_bdevs_discovered": 2, 00:18:14.027 "num_base_bdevs_operational": 2, 00:18:14.027 "process": { 00:18:14.027 "type": "rebuild", 00:18:14.027 "target": "spare", 00:18:14.027 "progress": { 00:18:14.027 "blocks": 2560, 00:18:14.027 "percent": 32 00:18:14.027 } 00:18:14.027 }, 00:18:14.027 "base_bdevs_list": [ 00:18:14.027 { 00:18:14.027 "name": "spare", 00:18:14.027 "uuid": "b10c6ca0-86f7-5e6a-9d73-5db15be00e31", 00:18:14.027 "is_configured": true, 00:18:14.027 "data_offset": 256, 00:18:14.027 "data_size": 7936 00:18:14.027 }, 00:18:14.027 { 00:18:14.027 "name": "BaseBdev2", 00:18:14.027 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4", 00:18:14.027 "is_configured": true, 00:18:14.027 "data_offset": 256, 00:18:14.027 "data_size": 7936 00:18:14.027 } 00:18:14.027 ] 00:18:14.027 }' 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.027 15:45:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.027 [2024-11-25 15:45:12.425687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.027 [2024-11-25 15:45:12.470651] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:14.027 [2024-11-25 15:45:12.470708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.027 [2024-11-25 15:45:12.470722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.027 [2024-11-25 15:45:12.470731] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.027 15:45:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.027 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.027 "name": "raid_bdev1", 00:18:14.027 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:14.027 "strip_size_kb": 0, 00:18:14.027 "state": "online", 00:18:14.027 "raid_level": "raid1", 00:18:14.027 "superblock": true, 00:18:14.027 "num_base_bdevs": 2, 00:18:14.027 "num_base_bdevs_discovered": 1, 00:18:14.027 "num_base_bdevs_operational": 1, 00:18:14.027 "base_bdevs_list": [ 00:18:14.027 { 00:18:14.027 "name": null, 00:18:14.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.027 "is_configured": false, 00:18:14.027 "data_offset": 0, 00:18:14.027 "data_size": 7936 00:18:14.028 }, 00:18:14.028 { 00:18:14.028 "name": "BaseBdev2", 00:18:14.028 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4", 00:18:14.028 "is_configured": true, 00:18:14.028 "data_offset": 256, 00:18:14.028 "data_size": 7936 00:18:14.028 } 00:18:14.028 ] 00:18:14.028 }' 00:18:14.028 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.028 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.288 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.288 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.288 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:14.288 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:14.288 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.548 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.548 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.548 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.548 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.548 15:45:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.548 15:45:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.548 "name": "raid_bdev1", 00:18:14.548 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:14.548 "strip_size_kb": 0, 00:18:14.548 "state": "online", 00:18:14.548 "raid_level": "raid1", 00:18:14.548 "superblock": true, 00:18:14.548 "num_base_bdevs": 2, 00:18:14.548 "num_base_bdevs_discovered": 1, 00:18:14.548 "num_base_bdevs_operational": 1, 00:18:14.548 "base_bdevs_list": [ 00:18:14.548 { 00:18:14.548 "name": null, 00:18:14.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.548 
"is_configured": false, 00:18:14.548 "data_offset": 0, 00:18:14.548 "data_size": 7936 00:18:14.548 }, 00:18:14.548 { 00:18:14.548 "name": "BaseBdev2", 00:18:14.548 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4", 00:18:14.548 "is_configured": true, 00:18:14.548 "data_offset": 256, 00:18:14.548 "data_size": 7936 00:18:14.548 } 00:18:14.548 ] 00:18:14.548 }' 00:18:14.548 15:45:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.548 15:45:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.548 15:45:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.548 15:45:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.548 15:45:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:14.548 15:45:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.548 15:45:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.548 [2024-11-25 15:45:13.116713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.548 [2024-11-25 15:45:13.129936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:14.548 15:45:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.548 15:45:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:14.548 [2024-11-25 15:45:13.131782] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:15.489 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.489 15:45:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.489 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.489 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.489 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.489 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.489 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.489 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.489 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.489 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.750 "name": "raid_bdev1", 00:18:15.750 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:15.750 "strip_size_kb": 0, 00:18:15.750 "state": "online", 00:18:15.750 "raid_level": "raid1", 00:18:15.750 "superblock": true, 00:18:15.750 "num_base_bdevs": 2, 00:18:15.750 "num_base_bdevs_discovered": 2, 00:18:15.750 "num_base_bdevs_operational": 2, 00:18:15.750 "process": { 00:18:15.750 "type": "rebuild", 00:18:15.750 "target": "spare", 00:18:15.750 "progress": { 00:18:15.750 "blocks": 2560, 00:18:15.750 "percent": 32 00:18:15.750 } 00:18:15.750 }, 00:18:15.750 "base_bdevs_list": [ 00:18:15.750 { 00:18:15.750 "name": "spare", 00:18:15.750 "uuid": "b10c6ca0-86f7-5e6a-9d73-5db15be00e31", 00:18:15.750 "is_configured": true, 00:18:15.750 "data_offset": 256, 00:18:15.750 "data_size": 7936 00:18:15.750 }, 
00:18:15.750 { 00:18:15.750 "name": "BaseBdev2", 00:18:15.750 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4", 00:18:15.750 "is_configured": true, 00:18:15.750 "data_offset": 256, 00:18:15.750 "data_size": 7936 00:18:15.750 } 00:18:15.750 ] 00:18:15.750 }' 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:15.750 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=687 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.750 15:45:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.750 "name": "raid_bdev1", 00:18:15.750 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:15.750 "strip_size_kb": 0, 00:18:15.750 "state": "online", 00:18:15.750 "raid_level": "raid1", 00:18:15.750 "superblock": true, 00:18:15.750 "num_base_bdevs": 2, 00:18:15.750 "num_base_bdevs_discovered": 2, 00:18:15.750 "num_base_bdevs_operational": 2, 00:18:15.750 "process": { 00:18:15.750 "type": "rebuild", 00:18:15.750 "target": "spare", 00:18:15.750 "progress": { 00:18:15.750 "blocks": 2816, 00:18:15.750 "percent": 35 00:18:15.750 } 00:18:15.750 }, 00:18:15.750 "base_bdevs_list": [ 00:18:15.750 { 00:18:15.750 "name": "spare", 00:18:15.750 "uuid": "b10c6ca0-86f7-5e6a-9d73-5db15be00e31", 00:18:15.750 "is_configured": true, 00:18:15.750 "data_offset": 256, 00:18:15.750 "data_size": 7936 00:18:15.750 }, 00:18:15.750 { 00:18:15.750 "name": "BaseBdev2", 00:18:15.750 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4", 00:18:15.750 
"is_configured": true, 00:18:15.750 "data_offset": 256, 00:18:15.750 "data_size": 7936 00:18:15.750 } 00:18:15.750 ] 00:18:15.750 }' 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.750 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.011 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.011 15:45:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.952 15:45:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.952 "name": "raid_bdev1", 00:18:16.952 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:16.952 "strip_size_kb": 0, 00:18:16.952 "state": "online", 00:18:16.952 "raid_level": "raid1", 00:18:16.952 "superblock": true, 00:18:16.952 "num_base_bdevs": 2, 00:18:16.952 "num_base_bdevs_discovered": 2, 00:18:16.952 "num_base_bdevs_operational": 2, 00:18:16.952 "process": { 00:18:16.952 "type": "rebuild", 00:18:16.952 "target": "spare", 00:18:16.952 "progress": { 00:18:16.952 "blocks": 5888, 00:18:16.952 "percent": 74 00:18:16.952 } 00:18:16.952 }, 00:18:16.952 "base_bdevs_list": [ 00:18:16.952 { 00:18:16.952 "name": "spare", 00:18:16.952 "uuid": "b10c6ca0-86f7-5e6a-9d73-5db15be00e31", 00:18:16.952 "is_configured": true, 00:18:16.952 "data_offset": 256, 00:18:16.952 "data_size": 7936 00:18:16.952 }, 00:18:16.952 { 00:18:16.952 "name": "BaseBdev2", 00:18:16.952 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4", 00:18:16.952 "is_configured": true, 00:18:16.952 "data_offset": 256, 00:18:16.952 "data_size": 7936 00:18:16.952 } 00:18:16.952 ] 00:18:16.952 }' 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.952 15:45:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:17.893 [2024-11-25 15:45:16.243164] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:17.893 [2024-11-25 15:45:16.243278] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:17.893 [2024-11-25 15:45:16.243421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.153 "name": "raid_bdev1", 00:18:18.153 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:18.153 "strip_size_kb": 0, 00:18:18.153 "state": "online", 00:18:18.153 "raid_level": "raid1", 00:18:18.153 "superblock": true, 00:18:18.153 
"num_base_bdevs": 2, 00:18:18.153 "num_base_bdevs_discovered": 2, 00:18:18.153 "num_base_bdevs_operational": 2, 00:18:18.153 "base_bdevs_list": [ 00:18:18.153 { 00:18:18.153 "name": "spare", 00:18:18.153 "uuid": "b10c6ca0-86f7-5e6a-9d73-5db15be00e31", 00:18:18.153 "is_configured": true, 00:18:18.153 "data_offset": 256, 00:18:18.153 "data_size": 7936 00:18:18.153 }, 00:18:18.153 { 00:18:18.153 "name": "BaseBdev2", 00:18:18.153 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4", 00:18:18.153 "is_configured": true, 00:18:18.153 "data_offset": 256, 00:18:18.153 "data_size": 7936 00:18:18.153 } 00:18:18.153 ] 00:18:18.153 }' 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.153 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:18.154 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:18.154 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.154 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.154 15:45:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.154 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.154 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.154 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.154 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.154 "name": "raid_bdev1", 00:18:18.154 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:18.154 "strip_size_kb": 0, 00:18:18.154 "state": "online", 00:18:18.154 "raid_level": "raid1", 00:18:18.154 "superblock": true, 00:18:18.154 "num_base_bdevs": 2, 00:18:18.154 "num_base_bdevs_discovered": 2, 00:18:18.154 "num_base_bdevs_operational": 2, 00:18:18.154 "base_bdevs_list": [ 00:18:18.154 { 00:18:18.154 "name": "spare", 00:18:18.154 "uuid": "b10c6ca0-86f7-5e6a-9d73-5db15be00e31", 00:18:18.154 "is_configured": true, 00:18:18.154 "data_offset": 256, 00:18:18.154 "data_size": 7936 00:18:18.154 }, 00:18:18.154 { 00:18:18.154 "name": "BaseBdev2", 00:18:18.154 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4", 00:18:18.154 "is_configured": true, 00:18:18.154 "data_offset": 256, 00:18:18.154 "data_size": 7936 00:18:18.154 } 00:18:18.154 ] 00:18:18.154 }' 00:18:18.154 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.413 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:18.413 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.414 "name": "raid_bdev1", 00:18:18.414 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:18.414 
"strip_size_kb": 0, 00:18:18.414 "state": "online", 00:18:18.414 "raid_level": "raid1", 00:18:18.414 "superblock": true, 00:18:18.414 "num_base_bdevs": 2, 00:18:18.414 "num_base_bdevs_discovered": 2, 00:18:18.414 "num_base_bdevs_operational": 2, 00:18:18.414 "base_bdevs_list": [ 00:18:18.414 { 00:18:18.414 "name": "spare", 00:18:18.414 "uuid": "b10c6ca0-86f7-5e6a-9d73-5db15be00e31", 00:18:18.414 "is_configured": true, 00:18:18.414 "data_offset": 256, 00:18:18.414 "data_size": 7936 00:18:18.414 }, 00:18:18.414 { 00:18:18.414 "name": "BaseBdev2", 00:18:18.414 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4", 00:18:18.414 "is_configured": true, 00:18:18.414 "data_offset": 256, 00:18:18.414 "data_size": 7936 00:18:18.414 } 00:18:18.414 ] 00:18:18.414 }' 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.414 15:45:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.674 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:18.674 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.674 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.674 [2024-11-25 15:45:17.308470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:18.674 [2024-11-25 15:45:17.308546] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:18.674 [2024-11-25 15:45:17.308642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.674 [2024-11-25 15:45:17.308735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:18.674 [2024-11-25 15:45:17.308775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:18:18.674 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.674 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.674 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:18.674 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.674 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.674 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:18.935 /dev/nbd0 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.935 1+0 records in 00:18:18.935 1+0 records out 00:18:18.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559894 s, 7.3 MB/s 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:18.935 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:19.195 /dev/nbd1 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.195 1+0 records in 00:18:19.195 1+0 records out 00:18:19.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423085 s, 9.7 MB/s 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:19.195 15:45:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:19.456 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:19.456 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:19.456 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:19.456 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:18:19.456 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:19.456 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.456 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:19.716 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:19.716 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:19.716 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:19.716 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.716 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.716 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:19.716 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:19.716 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.716 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.716 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:19.976 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:19.976 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:19.976 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:19.976 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.976 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.977 [2024-11-25 15:45:18.462017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:19.977 [2024-11-25 15:45:18.462067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.977 [2024-11-25 15:45:18.462089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:19.977 [2024-11-25 15:45:18.462097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:19.977 [2024-11-25 15:45:18.464025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.977 [2024-11-25 15:45:18.464060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:19.977 [2024-11-25 15:45:18.464120] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:19.977 [2024-11-25 15:45:18.464176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:19.977 [2024-11-25 15:45:18.464311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:19.977 spare 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.977 [2024-11-25 15:45:18.564196] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:19.977 [2024-11-25 15:45:18.564263] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:19.977 [2024-11-25 15:45:18.564378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:19.977 [2024-11-25 15:45:18.564533] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:19.977 [2024-11-25 15:45:18.564542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:19.977 [2024-11-25 15:45:18.564664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.977 "name": "raid_bdev1", 00:18:19.977 "uuid": 
"fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:19.977 "strip_size_kb": 0, 00:18:19.977 "state": "online", 00:18:19.977 "raid_level": "raid1", 00:18:19.977 "superblock": true, 00:18:19.977 "num_base_bdevs": 2, 00:18:19.977 "num_base_bdevs_discovered": 2, 00:18:19.977 "num_base_bdevs_operational": 2, 00:18:19.977 "base_bdevs_list": [ 00:18:19.977 { 00:18:19.977 "name": "spare", 00:18:19.977 "uuid": "b10c6ca0-86f7-5e6a-9d73-5db15be00e31", 00:18:19.977 "is_configured": true, 00:18:19.977 "data_offset": 256, 00:18:19.977 "data_size": 7936 00:18:19.977 }, 00:18:19.977 { 00:18:19.977 "name": "BaseBdev2", 00:18:19.977 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4", 00:18:19.977 "is_configured": true, 00:18:19.977 "data_offset": 256, 00:18:19.977 "data_size": 7936 00:18:19.977 } 00:18:19.977 ] 00:18:19.977 }' 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.977 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.548 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.548 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.548 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.548 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.548 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.548 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.548 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.548 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.548 15:45:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.548 "name": "raid_bdev1", 00:18:20.548 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:20.548 "strip_size_kb": 0, 00:18:20.548 "state": "online", 00:18:20.548 "raid_level": "raid1", 00:18:20.548 "superblock": true, 00:18:20.548 "num_base_bdevs": 2, 00:18:20.548 "num_base_bdevs_discovered": 2, 00:18:20.548 "num_base_bdevs_operational": 2, 00:18:20.548 "base_bdevs_list": [ 00:18:20.548 { 00:18:20.548 "name": "spare", 00:18:20.548 "uuid": "b10c6ca0-86f7-5e6a-9d73-5db15be00e31", 00:18:20.548 "is_configured": true, 00:18:20.548 "data_offset": 256, 00:18:20.548 "data_size": 7936 00:18:20.548 }, 00:18:20.548 { 00:18:20.548 "name": "BaseBdev2", 00:18:20.548 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4", 00:18:20.548 "is_configured": true, 00:18:20.548 "data_offset": 256, 00:18:20.548 "data_size": 7936 00:18:20.548 } 00:18:20.548 ] 00:18:20.548 }' 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.548 [2024-11-25 15:45:19.180833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.548 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.548 15:45:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.549 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.549 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.549 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.549 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.549 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.549 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.549 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.807 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.807 "name": "raid_bdev1", 00:18:20.807 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:20.807 "strip_size_kb": 0, 00:18:20.807 "state": "online", 00:18:20.807 "raid_level": "raid1", 00:18:20.807 "superblock": true, 00:18:20.807 "num_base_bdevs": 2, 00:18:20.807 "num_base_bdevs_discovered": 1, 00:18:20.807 "num_base_bdevs_operational": 1, 00:18:20.807 "base_bdevs_list": [ 00:18:20.807 { 00:18:20.807 "name": null, 00:18:20.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.807 "is_configured": false, 00:18:20.807 "data_offset": 0, 00:18:20.807 "data_size": 7936 00:18:20.807 }, 00:18:20.807 { 00:18:20.807 "name": "BaseBdev2", 00:18:20.807 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4", 00:18:20.807 "is_configured": true, 00:18:20.807 "data_offset": 256, 00:18:20.807 "data_size": 7936 00:18:20.807 } 00:18:20.807 ] 00:18:20.807 }' 00:18:20.807 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.807 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.067 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:21.067 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.067 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.067 [2024-11-25 15:45:19.683980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:21.067 [2024-11-25 15:45:19.684170] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:21.067 [2024-11-25 15:45:19.684233] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:21.067 [2024-11-25 15:45:19.684293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:21.067 [2024-11-25 15:45:19.697541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:21.067 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.067 15:45:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:21.067 [2024-11-25 15:45:19.699285] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.451 15:45:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.451 "name": "raid_bdev1", 00:18:22.451 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:22.451 "strip_size_kb": 0, 00:18:22.451 "state": "online", 00:18:22.451 "raid_level": "raid1", 00:18:22.451 "superblock": true, 00:18:22.451 "num_base_bdevs": 2, 00:18:22.451 "num_base_bdevs_discovered": 2, 00:18:22.451 "num_base_bdevs_operational": 2, 00:18:22.451 "process": { 00:18:22.451 "type": "rebuild", 00:18:22.451 "target": "spare", 00:18:22.451 "progress": { 00:18:22.451 "blocks": 2560, 00:18:22.451 "percent": 32 00:18:22.451 } 00:18:22.451 }, 00:18:22.451 "base_bdevs_list": [ 00:18:22.451 { 00:18:22.451 "name": "spare", 00:18:22.451 "uuid": "b10c6ca0-86f7-5e6a-9d73-5db15be00e31", 00:18:22.451 "is_configured": true, 00:18:22.451 "data_offset": 256, 00:18:22.451 "data_size": 7936 00:18:22.451 }, 00:18:22.451 { 00:18:22.451 "name": "BaseBdev2", 00:18:22.451 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4", 00:18:22.451 "is_configured": true, 00:18:22.451 "data_offset": 256, 00:18:22.451 "data_size": 7936 00:18:22.451 } 00:18:22.451 ] 00:18:22.451 
}' 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.451 [2024-11-25 15:45:20.859098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:22.451 [2024-11-25 15:45:20.903965] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:22.451 [2024-11-25 15:45:20.904098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.451 [2024-11-25 15:45:20.904132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:22.451 [2024-11-25 15:45:20.904185] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.451 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.451 "name": "raid_bdev1", 00:18:22.451 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d", 00:18:22.451 "strip_size_kb": 0, 00:18:22.451 "state": "online", 00:18:22.451 "raid_level": "raid1", 00:18:22.451 "superblock": true, 00:18:22.451 "num_base_bdevs": 2, 00:18:22.451 "num_base_bdevs_discovered": 1, 00:18:22.451 "num_base_bdevs_operational": 1, 00:18:22.451 "base_bdevs_list": [ 00:18:22.451 { 00:18:22.451 "name": 
null,
00:18:22.451 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:22.451 "is_configured": false,
00:18:22.451 "data_offset": 0,
00:18:22.451 "data_size": 7936
00:18:22.451 },
00:18:22.451 {
00:18:22.451 "name": "BaseBdev2",
00:18:22.451 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4",
00:18:22.451 "is_configured": true,
00:18:22.451 "data_offset": 256,
00:18:22.451 "data_size": 7936
00:18:22.451 }
00:18:22.451 ]
00:18:22.452 }'
00:18:22.452 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:22.452 15:45:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:23.023 15:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:18:23.023 15:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:23.023 15:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:23.023 [2024-11-25 15:45:21.402177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:18:23.023 [2024-11-25 15:45:21.402231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:23.023 [2024-11-25 15:45:21.402256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:18:23.023 [2024-11-25 15:45:21.402267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:23.023 [2024-11-25 15:45:21.402491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:23.023 [2024-11-25 15:45:21.402511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:18:23.023 [2024-11-25 15:45:21.402559] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:18:23.023 [2024-11-25 15:45:21.402573] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:18:23.023 [2024-11-25 15:45:21.402582] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:18:23.023 [2024-11-25 15:45:21.402601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:23.023 [2024-11-25 15:45:21.416806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0
00:18:23.023 spare
00:18:23.023 15:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:23.023 15:45:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1
00:18:23.023 [2024-11-25 15:45:21.418619] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:23.964 "name": "raid_bdev1",
00:18:23.964 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d",
00:18:23.964 "strip_size_kb": 0,
00:18:23.964 "state": "online",
00:18:23.964 "raid_level": "raid1",
00:18:23.964 "superblock": true,
00:18:23.964 "num_base_bdevs": 2,
00:18:23.964 "num_base_bdevs_discovered": 2,
00:18:23.964 "num_base_bdevs_operational": 2,
00:18:23.964 "process": {
00:18:23.964 "type": "rebuild",
00:18:23.964 "target": "spare",
00:18:23.964 "progress": {
00:18:23.964 "blocks": 2560,
00:18:23.964 "percent": 32
00:18:23.964 }
00:18:23.964 },
00:18:23.964 "base_bdevs_list": [
00:18:23.964 {
00:18:23.964 "name": "spare",
00:18:23.964 "uuid": "b10c6ca0-86f7-5e6a-9d73-5db15be00e31",
00:18:23.964 "is_configured": true,
00:18:23.964 "data_offset": 256,
00:18:23.964 "data_size": 7936
00:18:23.964 },
00:18:23.964 {
00:18:23.964 "name": "BaseBdev2",
00:18:23.964 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4",
00:18:23.964 "is_configured": true,
00:18:23.964 "data_offset": 256,
00:18:23.964 "data_size": 7936
00:18:23.964 }
00:18:23.964 ]
00:18:23.964 }'
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:23.964 [2024-11-25 15:45:22.555239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:23.964 [2024-11-25 15:45:22.623065] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:23.964 [2024-11-25 15:45:22.623114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:23.964 [2024-11-25 15:45:22.623129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:23.964 [2024-11-25 15:45:22.623135] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:23.964 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:24.225 "name": "raid_bdev1",
00:18:24.225 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d",
00:18:24.225 "strip_size_kb": 0,
00:18:24.225 "state": "online",
00:18:24.225 "raid_level": "raid1",
00:18:24.225 "superblock": true,
00:18:24.225 "num_base_bdevs": 2,
00:18:24.225 "num_base_bdevs_discovered": 1,
00:18:24.225 "num_base_bdevs_operational": 1,
00:18:24.225 "base_bdevs_list": [
00:18:24.225 {
00:18:24.225 "name": null,
00:18:24.225 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:24.225 "is_configured": false,
00:18:24.225 "data_offset": 0,
00:18:24.225 "data_size": 7936
00:18:24.225 },
00:18:24.225 {
00:18:24.225 "name": "BaseBdev2",
00:18:24.225 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4",
00:18:24.225 "is_configured": true,
00:18:24.225 "data_offset": 256,
00:18:24.225 "data_size": 7936
00:18:24.225 }
00:18:24.225 ]
00:18:24.225 }'
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:24.225 15:45:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:24.485 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:24.485 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:24.485 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:24.485 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:24.485 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:24.485 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:24.485 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:24.485 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:24.485 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:24.485 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:24.485 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:24.485 "name": "raid_bdev1",
00:18:24.485 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d",
00:18:24.485 "strip_size_kb": 0,
00:18:24.485 "state": "online",
00:18:24.485 "raid_level": "raid1",
00:18:24.485 "superblock": true,
00:18:24.485 "num_base_bdevs": 2,
00:18:24.485 "num_base_bdevs_discovered": 1,
00:18:24.485 "num_base_bdevs_operational": 1,
00:18:24.485 "base_bdevs_list": [
00:18:24.485 {
00:18:24.485 "name": null,
00:18:24.485 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:24.485 "is_configured": false,
00:18:24.485 "data_offset": 0,
00:18:24.485 "data_size": 7936
00:18:24.485 },
00:18:24.485 {
00:18:24.485 "name": "BaseBdev2",
00:18:24.485 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4",
00:18:24.485 "is_configured": true,
00:18:24.485 "data_offset": 256,
00:18:24.485 "data_size": 7936
00:18:24.485 }
00:18:24.485 ]
00:18:24.485 }'
00:18:24.485 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:24.745 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:24.745 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:24.745 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:24.745 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:18:24.745 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:24.745 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:24.745 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:24.745 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:18:24.745 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:24.745 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:24.745 [2024-11-25 15:45:23.269570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:18:24.745 [2024-11-25 15:45:23.269616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:24.745 [2024-11-25 15:45:23.269639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:18:24.745 [2024-11-25 15:45:23.269649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:24.745 [2024-11-25 15:45:23.269847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:24.745 [2024-11-25 15:45:23.269859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:18:24.745 [2024-11-25 15:45:23.269903] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:18:24.745 [2024-11-25 15:45:23.269915] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:18:24.745 [2024-11-25 15:45:23.269928] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:18:24.745 [2024-11-25 15:45:23.269937] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:18:24.745 BaseBdev1
00:18:24.745 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:24.745 15:45:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:25.686 "name": "raid_bdev1",
00:18:25.686 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d",
00:18:25.686 "strip_size_kb": 0,
00:18:25.686 "state": "online",
00:18:25.686 "raid_level": "raid1",
00:18:25.686 "superblock": true,
00:18:25.686 "num_base_bdevs": 2,
00:18:25.686 "num_base_bdevs_discovered": 1,
00:18:25.686 "num_base_bdevs_operational": 1,
00:18:25.686 "base_bdevs_list": [
00:18:25.686 {
00:18:25.686 "name": null,
00:18:25.686 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:25.686 "is_configured": false,
00:18:25.686 "data_offset": 0,
00:18:25.686 "data_size": 7936
00:18:25.686 },
00:18:25.686 {
00:18:25.686 "name": "BaseBdev2",
00:18:25.686 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4",
00:18:25.686 "is_configured": true,
00:18:25.686 "data_offset": 256,
00:18:25.686 "data_size": 7936
00:18:25.686 }
00:18:25.686 ]
00:18:25.686 }'
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:25.686 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:26.255 "name": "raid_bdev1",
00:18:26.255 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d",
00:18:26.255 "strip_size_kb": 0,
00:18:26.255 "state": "online",
00:18:26.255 "raid_level": "raid1",
00:18:26.255 "superblock": true,
00:18:26.255 "num_base_bdevs": 2,
00:18:26.255 "num_base_bdevs_discovered": 1,
00:18:26.255 "num_base_bdevs_operational": 1,
00:18:26.255 "base_bdevs_list": [
00:18:26.255 {
00:18:26.255 "name": null,
00:18:26.255 "uuid": "00000000-0000-0000-0000-000000000000", "is_configured": false,
00:18:26.255 "data_offset": 0,
00:18:26.255 "data_size": 7936
00:18:26.255 },
00:18:26.255 {
00:18:26.255 "name": "BaseBdev2",
00:18:26.255 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4",
00:18:26.255 "is_configured": true,
00:18:26.255 "data_offset": 256,
00:18:26.255 "data_size": 7936
00:18:26.255 }
00:18:26.255 ]
00:18:26.255 }'
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:26.255 [2024-11-25 15:45:24.878925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:26.255 [2024-11-25 15:45:24.879063] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:18:26.255 [2024-11-25 15:45:24.879078] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:18:26.255 request:
00:18:26.255 {
00:18:26.255 "base_bdev": "BaseBdev1",
00:18:26.255 "raid_bdev": "raid_bdev1",
00:18:26.255 "method": "bdev_raid_add_base_bdev",
00:18:26.255 "req_id": 1
00:18:26.255 }
00:18:26.255 Got JSON-RPC error response
00:18:26.255 response:
00:18:26.255 {
00:18:26.255 "code": -22,
00:18:26.255 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:18:26.255 }
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:26.255 15:45:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:27.633 "name": "raid_bdev1",
00:18:27.633 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d",
00:18:27.633 "strip_size_kb": 0,
00:18:27.633 "state": "online",
00:18:27.633 "raid_level": "raid1",
00:18:27.633 "superblock": true,
00:18:27.633 "num_base_bdevs": 2,
00:18:27.633 "num_base_bdevs_discovered": 1,
00:18:27.633 "num_base_bdevs_operational": 1,
00:18:27.633 "base_bdevs_list": [
00:18:27.633 {
00:18:27.633 "name": null,
00:18:27.633 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:27.633 "is_configured": false,
00:18:27.633 "data_offset": 0,
00:18:27.633 "data_size": 7936
00:18:27.633 },
00:18:27.633 {
00:18:27.633 "name": "BaseBdev2",
00:18:27.633 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4",
00:18:27.633 "is_configured": true,
00:18:27.633 "data_offset": 256,
00:18:27.633 "data_size": 7936
00:18:27.633 }
00:18:27.633 ]
00:18:27.633 }'
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:27.633 15:45:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:27.892 "name": "raid_bdev1",
00:18:27.892 "uuid": "fd489d15-f52f-4f58-b167-eaf78af94e3d",
00:18:27.892 "strip_size_kb": 0,
00:18:27.892 "state": "online",
00:18:27.892 "raid_level": "raid1",
00:18:27.892 "superblock": true,
00:18:27.892 "num_base_bdevs": 2,
00:18:27.892 "num_base_bdevs_discovered": 1,
00:18:27.892 "num_base_bdevs_operational": 1,
00:18:27.892 "base_bdevs_list": [
00:18:27.892 {
00:18:27.892 "name": null,
00:18:27.892 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:27.892 "is_configured": false,
00:18:27.892 "data_offset": 0,
00:18:27.892 "data_size": 7936
00:18:27.892 },
00:18:27.892 {
00:18:27.892 "name": "BaseBdev2",
00:18:27.892 "uuid": "7c2b9020-7e4b-5fed-ae64-c0f3d0b411e4",
00:18:27.892 "is_configured": true,
00:18:27.892 "data_offset": 256,
00:18:27.892 "data_size": 7936
00:18:27.892 }
00:18:27.892 ]
00:18:27.892 }'
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:27.892 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87369
00:18:27.893 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87369 ']'
00:18:27.893 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87369
00:18:27.893 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname
00:18:27.893 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:27.893 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87369
00:18:27.893 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:27.893 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 87369
Received shutdown signal, test time was about 60.000000 seconds
00:18:27.893
00:18:27.893 Latency(us)
00:18:27.893 [2024-11-25T15:45:26.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:27.893 [2024-11-25T15:45:26.574Z] ===================================================================================================================
00:18:27.893 [2024-11-25T15:45:26.574Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:27.893 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87369'
00:18:27.893 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87369
00:18:27.893 [2024-11-25 15:45:26.531687] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:27.893 [2024-11-25 15:45:26.531791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:27.893 [2024-11-25 15:45:26.531832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:27.893 [2024-11-25 15:45:26.531843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline
00:18:27.893 15:45:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87369
00:18:28.152 [2024-11-25 15:45:26.829987] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:29.533 15:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0
00:18:29.533
00:18:29.533 real 0m19.830s
00:18:29.533 user 0m26.103s
00:18:29.533 sys 0m2.644s
00:18:29.533 15:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:29.533 15:45:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:29.533 ************************************
00:18:29.533 END TEST raid_rebuild_test_sb_md_separate
00:18:29.533 ************************************
00:18:29.533 15:45:27 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i'
00:18:29.533 15:45:27 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true
00:18:29.533 15:45:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:18:29.533 15:45:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:29.533 15:45:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:18:29.533 ************************************
00:18:29.533 START TEST raid_state_function_test_sb_md_interleaved
00:18:29.533 ************************************
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
Process raid pid: 88057
15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88057
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88057'
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88057
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88057 ']'
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:29.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:29.533 15:45:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:29.534 [2024-11-25 15:45:28.017227] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization...
00:18:29.534 [2024-11-25 15:45:28.017452] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:29.534 [2024-11-25 15:45:28.191178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:29.794 [2024-11-25 15:45:28.299733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:30.054 [2024-11-25 15:45:28.492579] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:30.054 [2024-11-25 15:45:28.492663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:30.314 [2024-11-25 15:45:28.828475] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:30.314 [2024-11-25 15:45:28.828522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:30.314 [2024-11-25 15:45:28.828532] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:30.314 [2024-11-25 15:45:28.828542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:30.314 15:45:28
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.314 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.314 "name": "Existed_Raid", 00:18:30.314 "uuid": "063147de-4b8a-45b7-9e0b-f8788583d6c3", 00:18:30.314 "strip_size_kb": 0, 00:18:30.314 "state": "configuring", 00:18:30.314 "raid_level": "raid1", 00:18:30.314 "superblock": true, 00:18:30.314 "num_base_bdevs": 2, 00:18:30.314 "num_base_bdevs_discovered": 0, 00:18:30.314 "num_base_bdevs_operational": 2, 00:18:30.314 "base_bdevs_list": [ 00:18:30.315 { 00:18:30.315 "name": "BaseBdev1", 00:18:30.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.315 "is_configured": false, 00:18:30.315 "data_offset": 0, 00:18:30.315 "data_size": 0 00:18:30.315 }, 00:18:30.315 { 00:18:30.315 "name": "BaseBdev2", 00:18:30.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.315 "is_configured": false, 00:18:30.315 "data_offset": 0, 00:18:30.315 "data_size": 0 00:18:30.315 } 00:18:30.315 ] 00:18:30.315 }' 00:18:30.315 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.315 15:45:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.885 [2024-11-25 15:45:29.287643] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:30.885 [2024-11-25 15:45:29.287715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.885 [2024-11-25 15:45:29.299631] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:30.885 [2024-11-25 15:45:29.299703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:30.885 [2024-11-25 15:45:29.299728] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:30.885 [2024-11-25 15:45:29.299752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.885 [2024-11-25 15:45:29.344389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:30.885 BaseBdev1 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.885 [ 00:18:30.885 { 00:18:30.885 "name": "BaseBdev1", 00:18:30.885 "aliases": [ 00:18:30.885 "a7969a69-aa75-4bc6-aab7-72ec1545cd56" 00:18:30.885 ], 00:18:30.885 "product_name": "Malloc disk", 00:18:30.885 "block_size": 4128, 00:18:30.885 "num_blocks": 8192, 00:18:30.885 "uuid": "a7969a69-aa75-4bc6-aab7-72ec1545cd56", 00:18:30.885 "md_size": 32, 00:18:30.885 
"md_interleave": true, 00:18:30.885 "dif_type": 0, 00:18:30.885 "assigned_rate_limits": { 00:18:30.885 "rw_ios_per_sec": 0, 00:18:30.885 "rw_mbytes_per_sec": 0, 00:18:30.885 "r_mbytes_per_sec": 0, 00:18:30.885 "w_mbytes_per_sec": 0 00:18:30.885 }, 00:18:30.885 "claimed": true, 00:18:30.885 "claim_type": "exclusive_write", 00:18:30.885 "zoned": false, 00:18:30.885 "supported_io_types": { 00:18:30.885 "read": true, 00:18:30.885 "write": true, 00:18:30.885 "unmap": true, 00:18:30.885 "flush": true, 00:18:30.885 "reset": true, 00:18:30.885 "nvme_admin": false, 00:18:30.885 "nvme_io": false, 00:18:30.885 "nvme_io_md": false, 00:18:30.885 "write_zeroes": true, 00:18:30.885 "zcopy": true, 00:18:30.885 "get_zone_info": false, 00:18:30.885 "zone_management": false, 00:18:30.885 "zone_append": false, 00:18:30.885 "compare": false, 00:18:30.885 "compare_and_write": false, 00:18:30.885 "abort": true, 00:18:30.885 "seek_hole": false, 00:18:30.885 "seek_data": false, 00:18:30.885 "copy": true, 00:18:30.885 "nvme_iov_md": false 00:18:30.885 }, 00:18:30.885 "memory_domains": [ 00:18:30.885 { 00:18:30.885 "dma_device_id": "system", 00:18:30.885 "dma_device_type": 1 00:18:30.885 }, 00:18:30.885 { 00:18:30.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.885 "dma_device_type": 2 00:18:30.885 } 00:18:30.885 ], 00:18:30.885 "driver_specific": {} 00:18:30.885 } 00:18:30.885 ] 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.885 15:45:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.885 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.886 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.886 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.886 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.886 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.886 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.886 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.886 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.886 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.886 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.886 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.886 "name": "Existed_Raid", 00:18:30.886 "uuid": "477e9b92-1167-4585-a09f-04cf64a9c53f", 00:18:30.886 "strip_size_kb": 0, 00:18:30.886 "state": "configuring", 00:18:30.886 "raid_level": "raid1", 
00:18:30.886 "superblock": true, 00:18:30.886 "num_base_bdevs": 2, 00:18:30.886 "num_base_bdevs_discovered": 1, 00:18:30.886 "num_base_bdevs_operational": 2, 00:18:30.886 "base_bdevs_list": [ 00:18:30.886 { 00:18:30.886 "name": "BaseBdev1", 00:18:30.886 "uuid": "a7969a69-aa75-4bc6-aab7-72ec1545cd56", 00:18:30.886 "is_configured": true, 00:18:30.886 "data_offset": 256, 00:18:30.886 "data_size": 7936 00:18:30.886 }, 00:18:30.886 { 00:18:30.886 "name": "BaseBdev2", 00:18:30.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.886 "is_configured": false, 00:18:30.886 "data_offset": 0, 00:18:30.886 "data_size": 0 00:18:30.886 } 00:18:30.886 ] 00:18:30.886 }' 00:18:30.886 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.886 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.146 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:31.146 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.146 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.406 [2024-11-25 15:45:29.827689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:31.406 [2024-11-25 15:45:29.827767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.406 [2024-11-25 15:45:29.839750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.406 [2024-11-25 15:45:29.841480] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:31.406 [2024-11-25 15:45:29.841562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.406 
15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.406 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.407 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.407 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.407 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.407 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.407 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.407 "name": "Existed_Raid", 00:18:31.407 "uuid": "c54604ae-043f-49f1-8178-f95cfd3bebb3", 00:18:31.407 "strip_size_kb": 0, 00:18:31.407 "state": "configuring", 00:18:31.407 "raid_level": "raid1", 00:18:31.407 "superblock": true, 00:18:31.407 "num_base_bdevs": 2, 00:18:31.407 "num_base_bdevs_discovered": 1, 00:18:31.407 "num_base_bdevs_operational": 2, 00:18:31.407 "base_bdevs_list": [ 00:18:31.407 { 00:18:31.407 "name": "BaseBdev1", 00:18:31.407 "uuid": "a7969a69-aa75-4bc6-aab7-72ec1545cd56", 00:18:31.407 "is_configured": true, 00:18:31.407 "data_offset": 256, 00:18:31.407 "data_size": 7936 00:18:31.407 }, 00:18:31.407 { 00:18:31.407 "name": "BaseBdev2", 00:18:31.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.407 "is_configured": false, 00:18:31.407 "data_offset": 0, 00:18:31.407 "data_size": 0 00:18:31.407 } 00:18:31.407 ] 00:18:31.407 }' 00:18:31.407 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:31.407 15:45:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.667 [2024-11-25 15:45:30.311263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.667 [2024-11-25 15:45:30.311447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:31.667 [2024-11-25 15:45:30.311460] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:31.667 [2024-11-25 15:45:30.311542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:31.667 [2024-11-25 15:45:30.311621] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:31.667 [2024-11-25 15:45:30.311631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:31.667 [2024-11-25 15:45:30.311691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.667 BaseBdev2 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.667 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.667 [ 00:18:31.667 { 00:18:31.667 "name": "BaseBdev2", 00:18:31.667 "aliases": [ 00:18:31.667 "62a06f50-0ae9-44b1-a6c0-94b508c785ca" 00:18:31.667 ], 00:18:31.667 "product_name": "Malloc disk", 00:18:31.667 "block_size": 4128, 00:18:31.667 "num_blocks": 8192, 00:18:31.667 "uuid": "62a06f50-0ae9-44b1-a6c0-94b508c785ca", 00:18:31.667 "md_size": 32, 00:18:31.667 "md_interleave": true, 00:18:31.667 "dif_type": 0, 00:18:31.667 "assigned_rate_limits": { 00:18:31.667 "rw_ios_per_sec": 0, 00:18:31.667 "rw_mbytes_per_sec": 0, 00:18:31.667 "r_mbytes_per_sec": 0, 00:18:31.667 "w_mbytes_per_sec": 0 00:18:31.667 }, 00:18:31.667 "claimed": true, 00:18:31.667 "claim_type": "exclusive_write", 
00:18:31.667 "zoned": false, 00:18:31.667 "supported_io_types": { 00:18:31.667 "read": true, 00:18:31.667 "write": true, 00:18:31.667 "unmap": true, 00:18:31.667 "flush": true, 00:18:31.667 "reset": true, 00:18:31.667 "nvme_admin": false, 00:18:31.667 "nvme_io": false, 00:18:31.667 "nvme_io_md": false, 00:18:31.667 "write_zeroes": true, 00:18:31.667 "zcopy": true, 00:18:31.667 "get_zone_info": false, 00:18:31.667 "zone_management": false, 00:18:31.667 "zone_append": false, 00:18:31.667 "compare": false, 00:18:31.927 "compare_and_write": false, 00:18:31.927 "abort": true, 00:18:31.927 "seek_hole": false, 00:18:31.927 "seek_data": false, 00:18:31.927 "copy": true, 00:18:31.927 "nvme_iov_md": false 00:18:31.927 }, 00:18:31.927 "memory_domains": [ 00:18:31.927 { 00:18:31.927 "dma_device_id": "system", 00:18:31.927 "dma_device_type": 1 00:18:31.927 }, 00:18:31.927 { 00:18:31.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.927 "dma_device_type": 2 00:18:31.927 } 00:18:31.927 ], 00:18:31.927 "driver_specific": {} 00:18:31.927 } 00:18:31.927 ] 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.927 
15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.927 "name": "Existed_Raid", 00:18:31.927 "uuid": "c54604ae-043f-49f1-8178-f95cfd3bebb3", 00:18:31.927 "strip_size_kb": 0, 00:18:31.927 "state": "online", 00:18:31.927 "raid_level": "raid1", 00:18:31.927 "superblock": true, 00:18:31.927 "num_base_bdevs": 2, 00:18:31.927 "num_base_bdevs_discovered": 2, 00:18:31.927 
"num_base_bdevs_operational": 2, 00:18:31.927 "base_bdevs_list": [ 00:18:31.927 { 00:18:31.927 "name": "BaseBdev1", 00:18:31.927 "uuid": "a7969a69-aa75-4bc6-aab7-72ec1545cd56", 00:18:31.927 "is_configured": true, 00:18:31.927 "data_offset": 256, 00:18:31.927 "data_size": 7936 00:18:31.927 }, 00:18:31.927 { 00:18:31.927 "name": "BaseBdev2", 00:18:31.927 "uuid": "62a06f50-0ae9-44b1-a6c0-94b508c785ca", 00:18:31.927 "is_configured": true, 00:18:31.927 "data_offset": 256, 00:18:31.927 "data_size": 7936 00:18:31.927 } 00:18:31.927 ] 00:18:31.927 }' 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.927 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.187 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:32.187 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:32.187 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:32.187 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:32.187 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:32.187 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:32.188 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:32.188 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:32.188 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.188 15:45:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.188 [2024-11-25 15:45:30.858656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.448 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.448 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:32.448 "name": "Existed_Raid", 00:18:32.448 "aliases": [ 00:18:32.448 "c54604ae-043f-49f1-8178-f95cfd3bebb3" 00:18:32.448 ], 00:18:32.448 "product_name": "Raid Volume", 00:18:32.448 "block_size": 4128, 00:18:32.448 "num_blocks": 7936, 00:18:32.448 "uuid": "c54604ae-043f-49f1-8178-f95cfd3bebb3", 00:18:32.448 "md_size": 32, 00:18:32.448 "md_interleave": true, 00:18:32.448 "dif_type": 0, 00:18:32.448 "assigned_rate_limits": { 00:18:32.448 "rw_ios_per_sec": 0, 00:18:32.448 "rw_mbytes_per_sec": 0, 00:18:32.448 "r_mbytes_per_sec": 0, 00:18:32.448 "w_mbytes_per_sec": 0 00:18:32.448 }, 00:18:32.448 "claimed": false, 00:18:32.448 "zoned": false, 00:18:32.448 "supported_io_types": { 00:18:32.448 "read": true, 00:18:32.448 "write": true, 00:18:32.448 "unmap": false, 00:18:32.448 "flush": false, 00:18:32.448 "reset": true, 00:18:32.448 "nvme_admin": false, 00:18:32.448 "nvme_io": false, 00:18:32.448 "nvme_io_md": false, 00:18:32.448 "write_zeroes": true, 00:18:32.448 "zcopy": false, 00:18:32.448 "get_zone_info": false, 00:18:32.448 "zone_management": false, 00:18:32.448 "zone_append": false, 00:18:32.448 "compare": false, 00:18:32.448 "compare_and_write": false, 00:18:32.448 "abort": false, 00:18:32.448 "seek_hole": false, 00:18:32.448 "seek_data": false, 00:18:32.448 "copy": false, 00:18:32.448 "nvme_iov_md": false 00:18:32.448 }, 00:18:32.448 "memory_domains": [ 00:18:32.448 { 00:18:32.448 "dma_device_id": "system", 00:18:32.448 "dma_device_type": 1 00:18:32.448 }, 00:18:32.448 { 00:18:32.448 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",
00:18:32.448 "dma_device_type": 2
00:18:32.448 },
00:18:32.448 {
00:18:32.448 "dma_device_id": "system",
00:18:32.448 "dma_device_type": 1
00:18:32.448 },
00:18:32.448 {
00:18:32.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:32.448 "dma_device_type": 2
00:18:32.448 }
00:18:32.448 ],
00:18:32.448 "driver_specific": {
00:18:32.448 "raid": {
00:18:32.448 "uuid": "c54604ae-043f-49f1-8178-f95cfd3bebb3",
00:18:32.448 "strip_size_kb": 0,
00:18:32.448 "state": "online",
00:18:32.448 "raid_level": "raid1",
00:18:32.448 "superblock": true,
00:18:32.448 "num_base_bdevs": 2,
00:18:32.448 "num_base_bdevs_discovered": 2,
00:18:32.448 "num_base_bdevs_operational": 2,
00:18:32.448 "base_bdevs_list": [
00:18:32.448 {
00:18:32.448 "name": "BaseBdev1",
00:18:32.448 "uuid": "a7969a69-aa75-4bc6-aab7-72ec1545cd56",
00:18:32.448 "is_configured": true,
00:18:32.448 "data_offset": 256,
00:18:32.448 "data_size": 7936
00:18:32.448 },
00:18:32.448 {
00:18:32.448 "name": "BaseBdev2",
00:18:32.448 "uuid": "62a06f50-0ae9-44b1-a6c0-94b508c785ca",
00:18:32.448 "is_configured": true,
00:18:32.448 "data_offset": 256,
00:18:32.448 "data_size": 7936
00:18:32.448 }
00:18:32.448 ]
00:18:32.448 }
00:18:32.448 }
00:18:32.448 }'
00:18:32.448 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:18:32.448 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:18:32.448 BaseBdev2'
00:18:32.448 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:32.448 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:18:32.449 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:32.449 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:32.449 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:18:32.449 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:32.449 15:45:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:32.449 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:32.449 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:18:32.449 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:18:32.449 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:32.449 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:18:32.449 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:32.449 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:32.449 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:32.449 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:32.449 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:18:32.449 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:18:32.449 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:18:32.449 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:32.449 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:32.449 [2024-11-25 15:45:31.094016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:32.710 "name": "Existed_Raid",
00:18:32.710 "uuid": "c54604ae-043f-49f1-8178-f95cfd3bebb3",
00:18:32.710 "strip_size_kb": 0,
00:18:32.710 "state": "online",
00:18:32.710 "raid_level": "raid1",
00:18:32.710 "superblock": true,
00:18:32.710 "num_base_bdevs": 2,
00:18:32.710 "num_base_bdevs_discovered": 1,
00:18:32.710 "num_base_bdevs_operational": 1,
00:18:32.710 "base_bdevs_list": [
00:18:32.710 {
00:18:32.710 "name": null,
00:18:32.710 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:32.710 "is_configured": false,
00:18:32.710 "data_offset": 0,
00:18:32.710 "data_size": 7936
00:18:32.710 },
00:18:32.710 {
00:18:32.710 "name": "BaseBdev2",
00:18:32.710 "uuid": "62a06f50-0ae9-44b1-a6c0-94b508c785ca",
00:18:32.710 "is_configured": true,
00:18:32.710 "data_offset": 256,
00:18:32.710 "data_size": 7936
00:18:32.710 }
00:18:32.710 ]
00:18:32.710 }'
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:32.710 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:33.282 [2024-11-25 15:45:31.734734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:18:33.282 [2024-11-25 15:45:31.734885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:33.282 [2024-11-25 15:45:31.825410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:33.282 [2024-11-25 15:45:31.825516] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:33.282 [2024-11-25 15:45:31.825561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88057
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88057 ']'
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88057
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88057
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:33.282 killing process with pid 88057 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88057'
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88057
00:18:33.282 [2024-11-25 15:45:31.923533] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:33.282 15:45:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88057
00:18:33.282 [2024-11-25 15:45:31.939498] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:34.666 15:45:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0
00:18:34.666
00:18:34.666 real 0m5.049s
00:18:34.666 user 0m7.377s
00:18:34.666 sys 0m0.886s
00:18:34.666 15:45:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:34.666 ************************************
00:18:34.666 END TEST raid_state_function_test_sb_md_interleaved
00:18:34.666 ************************************
00:18:34.666 15:45:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:34.666 15:45:33 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2
00:18:34.666 15:45:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:18:34.666 15:45:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:34.666 15:45:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:18:34.666 ************************************
00:18:34.666 START TEST raid_superblock_test_md_interleaved
00:18:34.666 ************************************
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88309
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88309
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88309 ']'
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:34.666 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:34.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:34.667 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:34.667 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:34.667 [2024-11-25 15:45:33.136954] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization...
00:18:34.667 [2024-11-25 15:45:33.137164] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88309 ]
00:18:34.667 [2024-11-25 15:45:33.308410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:34.927 [2024-11-25 15:45:33.412776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:34.927 [2024-11-25 15:45:33.598313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:34.927 [2024-11-25 15:45:33.598363] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:35.501 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:35.501 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0
00:18:35.501 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:18:35.501 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:18:35.501 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:18:35.501 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:18:35.501 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:18:35.501 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:35.501 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:18:35.502 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:35.502 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1
00:18:35.502 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.502 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:35.502 malloc1
00:18:35.502 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.502 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:35.502 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.502 15:45:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:35.502 [2024-11-25 15:45:33.997910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:35.502 [2024-11-25 15:45:33.998065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:35.502 [2024-11-25 15:45:33.998110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:18:35.502 [2024-11-25 15:45:33.998147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:35.502 [2024-11-25 15:45:33.999988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:35.502 [2024-11-25 15:45:34.000085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:35.502 pt1
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:35.502 malloc2
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:35.502 [2024-11-25 15:45:34.053716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:35.502 [2024-11-25 15:45:34.053822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:35.502 [2024-11-25 15:45:34.053860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:18:35.502 [2024-11-25 15:45:34.053889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:35.502 [2024-11-25 15:45:34.055754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:35.502 [2024-11-25 15:45:34.055824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:35.502 pt2
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:35.502 [2024-11-25 15:45:34.065733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:35.502 [2024-11-25 15:45:34.067555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:35.502 [2024-11-25 15:45:34.067760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:18:35.502 [2024-11-25 15:45:34.067774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:18:35.502 [2024-11-25 15:45:34.067842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:18:35.502 [2024-11-25 15:45:34.067908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:18:35.502 [2024-11-25 15:45:34.067930] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:18:35.502 [2024-11-25 15:45:34.067996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:35.502 "name": "raid_bdev1",
00:18:35.502 "uuid": "14d77da0-4c4f-4854-a175-1328af615c76",
00:18:35.502 "strip_size_kb": 0,
00:18:35.502 "state": "online",
00:18:35.502 "raid_level": "raid1",
00:18:35.502 "superblock": true,
00:18:35.502 "num_base_bdevs": 2,
00:18:35.502 "num_base_bdevs_discovered": 2,
00:18:35.502 "num_base_bdevs_operational": 2,
00:18:35.502 "base_bdevs_list": [
00:18:35.502 {
00:18:35.502 "name": "pt1",
00:18:35.502 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:35.502 "is_configured": true,
00:18:35.502 "data_offset": 256,
00:18:35.502 "data_size": 7936
00:18:35.502 },
00:18:35.502 {
00:18:35.502 "name": "pt2",
00:18:35.502 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:35.502 "is_configured": true,
00:18:35.502 "data_offset": 256,
00:18:35.502 "data_size": 7936
00:18:35.502 }
00:18:35.502 ]
00:18:35.502 }'
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:35.502 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:36.115 [2024-11-25 15:45:34.489225] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:18:36.115 "name": "raid_bdev1",
00:18:36.115 "aliases": [
00:18:36.115 "14d77da0-4c4f-4854-a175-1328af615c76"
00:18:36.115 ],
00:18:36.115 "product_name": "Raid Volume",
00:18:36.115 "block_size": 4128,
00:18:36.115 "num_blocks": 7936,
00:18:36.115 "uuid": "14d77da0-4c4f-4854-a175-1328af615c76",
00:18:36.115 "md_size": 32,
00:18:36.115 "md_interleave": true,
00:18:36.115 "dif_type": 0,
00:18:36.115 "assigned_rate_limits": {
00:18:36.115 "rw_ios_per_sec": 0,
00:18:36.115 "rw_mbytes_per_sec": 0,
00:18:36.115 "r_mbytes_per_sec": 0,
00:18:36.115 "w_mbytes_per_sec": 0
00:18:36.115 },
00:18:36.115 "claimed": false,
00:18:36.115 "zoned": false,
00:18:36.115 "supported_io_types": {
00:18:36.115 "read": true,
00:18:36.115 "write": true,
00:18:36.115 "unmap": false,
00:18:36.115 "flush": false,
00:18:36.115 "reset": true,
00:18:36.115 "nvme_admin": false,
00:18:36.115 "nvme_io": false,
00:18:36.115 "nvme_io_md": false,
00:18:36.115 "write_zeroes": true,
00:18:36.115 "zcopy": false,
00:18:36.115 "get_zone_info": false,
00:18:36.115 "zone_management": false,
00:18:36.115 "zone_append": false,
00:18:36.115 "compare": false,
00:18:36.115 "compare_and_write": false,
00:18:36.115 "abort": false,
00:18:36.115 "seek_hole": false,
00:18:36.115 "seek_data": false,
00:18:36.115 "copy": false,
00:18:36.115 "nvme_iov_md": false
00:18:36.115 },
00:18:36.115 "memory_domains": [
00:18:36.115 {
00:18:36.115 "dma_device_id": "system",
00:18:36.115 "dma_device_type": 1
00:18:36.115 },
00:18:36.115 {
00:18:36.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:36.115 "dma_device_type": 2
00:18:36.115 },
00:18:36.115 {
00:18:36.115 "dma_device_id": "system",
00:18:36.115 "dma_device_type": 1
00:18:36.115 },
00:18:36.115 {
00:18:36.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:36.115 "dma_device_type": 2
00:18:36.115 }
00:18:36.115 ],
00:18:36.115 "driver_specific": {
00:18:36.115 "raid": {
00:18:36.115 "uuid": "14d77da0-4c4f-4854-a175-1328af615c76",
00:18:36.115 "strip_size_kb": 0,
00:18:36.115 "state": "online",
00:18:36.115 "raid_level": "raid1",
00:18:36.115 "superblock": true,
00:18:36.115 "num_base_bdevs": 2,
00:18:36.115 "num_base_bdevs_discovered": 2,
00:18:36.115 "num_base_bdevs_operational": 2,
00:18:36.115 "base_bdevs_list": [
00:18:36.115 {
00:18:36.115 "name": "pt1",
00:18:36.115 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:36.115 "is_configured": true,
00:18:36.115 "data_offset": 256,
00:18:36.115 "data_size": 7936
00:18:36.115 },
00:18:36.115 {
00:18:36.115 "name": "pt2",
00:18:36.115 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:36.115 "is_configured": true,
00:18:36.115 "data_offset": 256,
00:18:36.115 "data_size": 7936
00:18:36.115 }
00:18:36.115 ]
00:18:36.115 }
00:18:36.115 }
00:18:36.115 }'
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:18:36.115 pt2'
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:36.115 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:18:36.116 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.116 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:36.116 [2024-11-25 15:45:34.728742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:36.116 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.116 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=14d77da0-4c4f-4854-a175-1328af615c76
00:18:36.116 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 14d77da0-4c4f-4854-a175-1328af615c76 ']'
00:18:36.116 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:36.116 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.116 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:36.116 [2024-11-25 15:45:34.776422] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:36.116 [2024-11-25 15:45:34.776481] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:36.116 [2024-11-25 15:45:34.776575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:36.116 [2024-11-25 15:45:34.776642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:36.116 [2024-11-25 15:45:34.776676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:18:36.116 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.116 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:18:36.116 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:36.116 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.116 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:36.392 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.392 15:45:34
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:36.392 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:36.392 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:36.392 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:36.392 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.392 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.392 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.392 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:36.392 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:36.392 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.392 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.392 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.392 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:36.392 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.393 15:45:34 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.393 [2024-11-25 15:45:34.900224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:36.393 [2024-11-25 15:45:34.901920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:36.393 [2024-11-25 15:45:34.901983] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:36.393 [2024-11-25 15:45:34.902043] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:36.393 [2024-11-25 15:45:34.902059] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.393 [2024-11-25 15:45:34.902068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:36.393 request: 00:18:36.393 { 00:18:36.393 "name": "raid_bdev1", 00:18:36.393 "raid_level": "raid1", 00:18:36.393 "base_bdevs": [ 00:18:36.393 "malloc1", 00:18:36.393 "malloc2" 00:18:36.393 ], 00:18:36.393 "superblock": false, 00:18:36.393 "method": "bdev_raid_create", 00:18:36.393 "req_id": 1 00:18:36.393 } 00:18:36.393 Got JSON-RPC error response 00:18:36.393 response: 00:18:36.393 { 00:18:36.393 "code": -17, 00:18:36.393 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:36.393 } 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:36.393 15:45:34 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.393 [2024-11-25 15:45:34.968104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:36.393 [2024-11-25 15:45:34.968203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.393 [2024-11-25 15:45:34.968233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:36.393 [2024-11-25 15:45:34.968261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.393 [2024-11-25 15:45:34.970002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.393 [2024-11-25 15:45:34.970099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:36.393 [2024-11-25 15:45:34.970160] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:36.393 [2024-11-25 15:45:34.970236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:36.393 pt1 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.393 15:45:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.393 15:45:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.393 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.393 
"name": "raid_bdev1", 00:18:36.393 "uuid": "14d77da0-4c4f-4854-a175-1328af615c76", 00:18:36.393 "strip_size_kb": 0, 00:18:36.393 "state": "configuring", 00:18:36.393 "raid_level": "raid1", 00:18:36.393 "superblock": true, 00:18:36.393 "num_base_bdevs": 2, 00:18:36.393 "num_base_bdevs_discovered": 1, 00:18:36.393 "num_base_bdevs_operational": 2, 00:18:36.393 "base_bdevs_list": [ 00:18:36.393 { 00:18:36.393 "name": "pt1", 00:18:36.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:36.393 "is_configured": true, 00:18:36.393 "data_offset": 256, 00:18:36.393 "data_size": 7936 00:18:36.393 }, 00:18:36.393 { 00:18:36.393 "name": null, 00:18:36.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.393 "is_configured": false, 00:18:36.393 "data_offset": 256, 00:18:36.393 "data_size": 7936 00:18:36.393 } 00:18:36.393 ] 00:18:36.393 }' 00:18:36.393 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.393 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.963 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:36.963 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:36.963 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.964 [2024-11-25 15:45:35.411501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:36.964 [2024-11-25 15:45:35.411551] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.964 [2024-11-25 15:45:35.411567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:36.964 [2024-11-25 15:45:35.411576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.964 [2024-11-25 15:45:35.411701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.964 [2024-11-25 15:45:35.411713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:36.964 [2024-11-25 15:45:35.411747] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:36.964 [2024-11-25 15:45:35.411765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:36.964 [2024-11-25 15:45:35.411833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:36.964 [2024-11-25 15:45:35.411843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:36.964 [2024-11-25 15:45:35.411904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:36.964 [2024-11-25 15:45:35.411968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:36.964 [2024-11-25 15:45:35.411976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:36.964 [2024-11-25 15:45:35.412040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.964 pt2 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:36.964 15:45:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.964 "name": 
"raid_bdev1", 00:18:36.964 "uuid": "14d77da0-4c4f-4854-a175-1328af615c76", 00:18:36.964 "strip_size_kb": 0, 00:18:36.964 "state": "online", 00:18:36.964 "raid_level": "raid1", 00:18:36.964 "superblock": true, 00:18:36.964 "num_base_bdevs": 2, 00:18:36.964 "num_base_bdevs_discovered": 2, 00:18:36.964 "num_base_bdevs_operational": 2, 00:18:36.964 "base_bdevs_list": [ 00:18:36.964 { 00:18:36.964 "name": "pt1", 00:18:36.964 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:36.964 "is_configured": true, 00:18:36.964 "data_offset": 256, 00:18:36.964 "data_size": 7936 00:18:36.964 }, 00:18:36.964 { 00:18:36.964 "name": "pt2", 00:18:36.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.964 "is_configured": true, 00:18:36.964 "data_offset": 256, 00:18:36.964 "data_size": 7936 00:18:36.964 } 00:18:36.964 ] 00:18:36.964 }' 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.964 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.224 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:37.224 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:37.224 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:37.224 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:37.224 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:37.224 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:37.224 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:37.224 15:45:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:37.224 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.224 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.224 [2024-11-25 15:45:35.831021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.224 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.224 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:37.224 "name": "raid_bdev1", 00:18:37.224 "aliases": [ 00:18:37.224 "14d77da0-4c4f-4854-a175-1328af615c76" 00:18:37.224 ], 00:18:37.224 "product_name": "Raid Volume", 00:18:37.224 "block_size": 4128, 00:18:37.224 "num_blocks": 7936, 00:18:37.224 "uuid": "14d77da0-4c4f-4854-a175-1328af615c76", 00:18:37.224 "md_size": 32, 00:18:37.224 "md_interleave": true, 00:18:37.224 "dif_type": 0, 00:18:37.224 "assigned_rate_limits": { 00:18:37.224 "rw_ios_per_sec": 0, 00:18:37.224 "rw_mbytes_per_sec": 0, 00:18:37.224 "r_mbytes_per_sec": 0, 00:18:37.224 "w_mbytes_per_sec": 0 00:18:37.224 }, 00:18:37.224 "claimed": false, 00:18:37.224 "zoned": false, 00:18:37.224 "supported_io_types": { 00:18:37.224 "read": true, 00:18:37.224 "write": true, 00:18:37.224 "unmap": false, 00:18:37.224 "flush": false, 00:18:37.224 "reset": true, 00:18:37.224 "nvme_admin": false, 00:18:37.224 "nvme_io": false, 00:18:37.224 "nvme_io_md": false, 00:18:37.224 "write_zeroes": true, 00:18:37.224 "zcopy": false, 00:18:37.224 "get_zone_info": false, 00:18:37.224 "zone_management": false, 00:18:37.224 "zone_append": false, 00:18:37.224 "compare": false, 00:18:37.224 "compare_and_write": false, 00:18:37.224 "abort": false, 00:18:37.224 "seek_hole": false, 00:18:37.224 "seek_data": false, 00:18:37.224 "copy": false, 00:18:37.224 "nvme_iov_md": 
false 00:18:37.224 }, 00:18:37.224 "memory_domains": [ 00:18:37.224 { 00:18:37.224 "dma_device_id": "system", 00:18:37.224 "dma_device_type": 1 00:18:37.224 }, 00:18:37.224 { 00:18:37.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.224 "dma_device_type": 2 00:18:37.224 }, 00:18:37.224 { 00:18:37.224 "dma_device_id": "system", 00:18:37.224 "dma_device_type": 1 00:18:37.224 }, 00:18:37.224 { 00:18:37.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.224 "dma_device_type": 2 00:18:37.224 } 00:18:37.224 ], 00:18:37.224 "driver_specific": { 00:18:37.224 "raid": { 00:18:37.224 "uuid": "14d77da0-4c4f-4854-a175-1328af615c76", 00:18:37.224 "strip_size_kb": 0, 00:18:37.224 "state": "online", 00:18:37.224 "raid_level": "raid1", 00:18:37.224 "superblock": true, 00:18:37.224 "num_base_bdevs": 2, 00:18:37.224 "num_base_bdevs_discovered": 2, 00:18:37.224 "num_base_bdevs_operational": 2, 00:18:37.224 "base_bdevs_list": [ 00:18:37.224 { 00:18:37.224 "name": "pt1", 00:18:37.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:37.224 "is_configured": true, 00:18:37.224 "data_offset": 256, 00:18:37.224 "data_size": 7936 00:18:37.224 }, 00:18:37.224 { 00:18:37.224 "name": "pt2", 00:18:37.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.224 "is_configured": true, 00:18:37.224 "data_offset": 256, 00:18:37.224 "data_size": 7936 00:18:37.224 } 00:18:37.224 ] 00:18:37.224 } 00:18:37.224 } 00:18:37.224 }' 00:18:37.224 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:37.484 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:37.484 pt2' 00:18:37.484 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:37.484 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:37.484 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:37.485 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:37.485 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:37.485 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.485 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.485 15:45:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.485 [2024-11-25 15:45:36.074600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 14d77da0-4c4f-4854-a175-1328af615c76 '!=' 14d77da0-4c4f-4854-a175-1328af615c76 ']' 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.485 [2024-11-25 15:45:36.118313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.485 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.745 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:37.745 "name": "raid_bdev1", 00:18:37.745 "uuid": "14d77da0-4c4f-4854-a175-1328af615c76", 00:18:37.745 "strip_size_kb": 0, 00:18:37.745 "state": "online", 00:18:37.745 "raid_level": "raid1", 00:18:37.745 "superblock": true, 00:18:37.745 "num_base_bdevs": 2, 00:18:37.745 "num_base_bdevs_discovered": 1, 00:18:37.745 "num_base_bdevs_operational": 1, 00:18:37.745 "base_bdevs_list": [ 00:18:37.745 { 00:18:37.745 "name": null, 00:18:37.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.745 "is_configured": false, 00:18:37.745 "data_offset": 0, 00:18:37.745 "data_size": 7936 00:18:37.745 }, 00:18:37.745 { 00:18:37.745 "name": "pt2", 00:18:37.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.745 "is_configured": true, 00:18:37.745 "data_offset": 256, 00:18:37.745 "data_size": 7936 00:18:37.745 } 00:18:37.745 ] 00:18:37.745 }' 00:18:37.745 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.745 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.005 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:38.005 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.005 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.005 [2024-11-25 15:45:36.565534] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.005 [2024-11-25 15:45:36.565598] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.005 [2024-11-25 15:45:36.565685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.005 [2024-11-25 15:45:36.565738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:38.005 [2024-11-25 15:45:36.565772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:38.005 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.005 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.006 [2024-11-25 15:45:36.637423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:38.006 [2024-11-25 15:45:36.637468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.006 [2024-11-25 15:45:36.637482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:38.006 [2024-11-25 15:45:36.637491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.006 [2024-11-25 15:45:36.639332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.006 [2024-11-25 15:45:36.639417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:38.006 [2024-11-25 15:45:36.639465] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:38.006 [2024-11-25 15:45:36.639506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:38.006 [2024-11-25 15:45:36.639563] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:38.006 [2024-11-25 15:45:36.639574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:18:38.006 [2024-11-25 15:45:36.639665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:38.006 [2024-11-25 15:45:36.639726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:38.006 [2024-11-25 15:45:36.639733] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:38.006 [2024-11-25 15:45:36.639787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.006 pt2 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.006 15:45:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.006 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.266 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.266 "name": "raid_bdev1", 00:18:38.266 "uuid": "14d77da0-4c4f-4854-a175-1328af615c76", 00:18:38.266 "strip_size_kb": 0, 00:18:38.266 "state": "online", 00:18:38.266 "raid_level": "raid1", 00:18:38.266 "superblock": true, 00:18:38.266 "num_base_bdevs": 2, 00:18:38.266 "num_base_bdevs_discovered": 1, 00:18:38.266 "num_base_bdevs_operational": 1, 00:18:38.266 "base_bdevs_list": [ 00:18:38.266 { 00:18:38.266 "name": null, 00:18:38.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.266 "is_configured": false, 00:18:38.266 "data_offset": 256, 00:18:38.266 "data_size": 7936 00:18:38.266 }, 00:18:38.266 { 00:18:38.266 "name": "pt2", 00:18:38.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.266 "is_configured": true, 00:18:38.266 "data_offset": 256, 00:18:38.266 "data_size": 7936 00:18:38.266 } 00:18:38.266 ] 00:18:38.266 }' 00:18:38.266 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.266 15:45:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:38.528 15:45:37 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.528 [2024-11-25 15:45:37.068629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.528 [2024-11-25 15:45:37.068693] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.528 [2024-11-25 15:45:37.068771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.528 [2024-11-25 15:45:37.068822] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.528 [2024-11-25 15:45:37.068873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.528 [2024-11-25 15:45:37.132556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:38.528 [2024-11-25 15:45:37.132658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.528 [2024-11-25 15:45:37.132692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:38.528 [2024-11-25 15:45:37.132719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.528 [2024-11-25 15:45:37.134517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.528 [2024-11-25 15:45:37.134581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:38.528 [2024-11-25 15:45:37.134641] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:38.528 [2024-11-25 15:45:37.134692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:38.528 [2024-11-25 15:45:37.134790] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:38.528 [2024-11-25 15:45:37.134839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.528 [2024-11-25 15:45:37.134880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:38.528 [2024-11-25 15:45:37.134964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:38.528 [2024-11-25 15:45:37.135059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:38.528 [2024-11-25 15:45:37.135096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:38.528 [2024-11-25 15:45:37.135168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:38.528 [2024-11-25 15:45:37.135253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:38.528 [2024-11-25 15:45:37.135291] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:38.528 [2024-11-25 15:45:37.135391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.528 pt1 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.528 15:45:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.528 "name": "raid_bdev1", 00:18:38.528 "uuid": "14d77da0-4c4f-4854-a175-1328af615c76", 00:18:38.528 "strip_size_kb": 0, 00:18:38.528 "state": "online", 00:18:38.528 "raid_level": "raid1", 00:18:38.528 "superblock": true, 00:18:38.528 "num_base_bdevs": 2, 00:18:38.528 "num_base_bdevs_discovered": 1, 00:18:38.528 "num_base_bdevs_operational": 1, 00:18:38.528 "base_bdevs_list": [ 00:18:38.528 { 00:18:38.528 "name": null, 00:18:38.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.528 "is_configured": false, 00:18:38.528 "data_offset": 256, 00:18:38.528 "data_size": 7936 00:18:38.528 }, 00:18:38.528 { 00:18:38.528 "name": "pt2", 00:18:38.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.528 "is_configured": true, 00:18:38.528 "data_offset": 256, 00:18:38.528 "data_size": 7936 00:18:38.528 } 00:18:38.528 ] 00:18:38.528 }' 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.528 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.099 [2024-11-25 15:45:37.659918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 14d77da0-4c4f-4854-a175-1328af615c76 '!=' 14d77da0-4c4f-4854-a175-1328af615c76 ']' 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88309 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88309 ']' 00:18:39.099 15:45:37 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88309 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88309 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.099 killing process with pid 88309 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88309' 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88309 00:18:39.099 [2024-11-25 15:45:37.724811] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:39.099 [2024-11-25 15:45:37.724872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.099 [2024-11-25 15:45:37.724907] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.099 [2024-11-25 15:45:37.724919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:39.099 15:45:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88309 00:18:39.359 [2024-11-25 15:45:37.918411] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:40.301 15:45:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:40.301 ************************************ 00:18:40.301 END TEST 
raid_superblock_test_md_interleaved 00:18:40.301 ************************************ 00:18:40.301 00:18:40.301 real 0m5.900s 00:18:40.301 user 0m8.980s 00:18:40.301 sys 0m1.107s 00:18:40.301 15:45:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.301 15:45:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.561 15:45:39 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:40.561 15:45:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:40.561 15:45:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.561 15:45:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:40.561 ************************************ 00:18:40.561 START TEST raid_rebuild_test_sb_md_interleaved 00:18:40.561 ************************************ 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:40.561 15:45:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:40.561 
15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=88632 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:40.561 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88632 00:18:40.562 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88632 ']' 00:18:40.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.562 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.562 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.562 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.562 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.562 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.562 [2024-11-25 15:45:39.125115] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:18:40.562 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:40.562 Zero copy mechanism will not be used. 
00:18:40.562 [2024-11-25 15:45:39.125283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88632 ] 00:18:40.820 [2024-11-25 15:45:39.296593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.820 [2024-11-25 15:45:39.407189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.080 [2024-11-25 15:45:39.589885] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:41.080 [2024-11-25 15:45:39.589922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:41.340 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.340 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:41.340 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:41.340 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:41.340 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.340 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.340 BaseBdev1_malloc 00:18:41.340 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.340 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:41.340 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.340 15:45:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.340 [2024-11-25 15:45:39.976643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:41.340 [2024-11-25 15:45:39.976700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.340 [2024-11-25 15:45:39.976719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:41.341 [2024-11-25 15:45:39.976730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.341 [2024-11-25 15:45:39.978488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.341 [2024-11-25 15:45:39.978527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:41.341 BaseBdev1 00:18:41.341 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.341 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:41.341 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:41.341 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.341 15:45:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.601 BaseBdev2_malloc 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:41.601 [2024-11-25 15:45:40.029678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:41.601 [2024-11-25 15:45:40.029737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.601 [2024-11-25 15:45:40.029756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:41.601 [2024-11-25 15:45:40.029766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.601 [2024-11-25 15:45:40.031523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.601 [2024-11-25 15:45:40.031559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:41.601 BaseBdev2 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.601 spare_malloc 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.601 spare_delay 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.601 [2024-11-25 15:45:40.131201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:41.601 [2024-11-25 15:45:40.131309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.601 [2024-11-25 15:45:40.131347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:41.601 [2024-11-25 15:45:40.131378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.601 [2024-11-25 15:45:40.133176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.601 [2024-11-25 15:45:40.133276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:41.601 spare 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.601 [2024-11-25 15:45:40.143224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.601 [2024-11-25 15:45:40.144956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:41.601 [2024-11-25 
15:45:40.145151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:41.601 [2024-11-25 15:45:40.145166] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:41.601 [2024-11-25 15:45:40.145239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:41.601 [2024-11-25 15:45:40.145304] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:41.601 [2024-11-25 15:45:40.145312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:41.601 [2024-11-25 15:45:40.145377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.601 "name": "raid_bdev1", 00:18:41.601 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:41.601 "strip_size_kb": 0, 00:18:41.601 "state": "online", 00:18:41.601 "raid_level": "raid1", 00:18:41.601 "superblock": true, 00:18:41.601 "num_base_bdevs": 2, 00:18:41.601 "num_base_bdevs_discovered": 2, 00:18:41.601 "num_base_bdevs_operational": 2, 00:18:41.601 "base_bdevs_list": [ 00:18:41.601 { 00:18:41.601 "name": "BaseBdev1", 00:18:41.601 "uuid": "2b6b8a70-f41d-53f0-810e-c0a3029f7173", 00:18:41.601 "is_configured": true, 00:18:41.601 "data_offset": 256, 00:18:41.601 "data_size": 7936 00:18:41.601 }, 00:18:41.601 { 00:18:41.601 "name": "BaseBdev2", 00:18:41.601 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:41.601 "is_configured": true, 00:18:41.601 "data_offset": 256, 00:18:41.601 "data_size": 7936 00:18:41.601 } 00:18:41.601 ] 00:18:41.601 }' 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.601 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.171 15:45:40 
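The `verify_raid_bdev_state` helper above fetches the whole bdev list over RPC and narrows it to a single entry with `jq -r '.[] | select(.name == "raid_bdev1")'` before comparing fields. A minimal sketch of that selection step, run against a trimmed, hypothetical JSON sample in place of a live `rpc_cmd bdev_raid_get_bdevs all` call (the names, states, and counts below are illustrative, not captured from a target):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for `rpc_cmd bdev_raid_get_bdevs all` output:
# a trimmed two-entry sample, not from a real SPDK target.
sample='[
  {"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
   "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 2},
  {"name": "other_bdev", "state": "offline", "raid_level": "raid0",
   "num_base_bdevs_discovered": 1, "num_base_bdevs_operational": 2}
]'

# Same filter the test uses: keep only the entry whose name matches.
raid_bdev_info=$(printf '%s' "$sample" | jq -r '.[] | select(.name == "raid_bdev1")')

# Individual fields are then pulled out and compared to expected values.
state=$(printf '%s' "$raid_bdev_info" | jq -r '.state')
level=$(printf '%s' "$raid_bdev_info" | jq -r '.raid_level')
discovered=$(printf '%s' "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')

echo "$state $level $discovered"   # prints: online raid1 2
```

Selecting the one entry first, then querying fields from the cached `raid_bdev_info` string, mirrors how the test avoids issuing a separate RPC per field check.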
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:42.171 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:42.171 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.171 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.171 [2024-11-25 15:45:40.646594] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:42.171 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.171 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:42.171 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:42.172 15:45:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.172 [2024-11-25 15:45:40.734191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.172 15:45:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.172 "name": "raid_bdev1", 00:18:42.172 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:42.172 "strip_size_kb": 0, 00:18:42.172 "state": "online", 00:18:42.172 "raid_level": "raid1", 00:18:42.172 "superblock": true, 00:18:42.172 "num_base_bdevs": 2, 00:18:42.172 "num_base_bdevs_discovered": 1, 00:18:42.172 "num_base_bdevs_operational": 1, 00:18:42.172 "base_bdevs_list": [ 00:18:42.172 { 00:18:42.172 "name": null, 00:18:42.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.172 "is_configured": false, 00:18:42.172 "data_offset": 0, 00:18:42.172 "data_size": 7936 00:18:42.172 }, 00:18:42.172 { 00:18:42.172 "name": "BaseBdev2", 00:18:42.172 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:42.172 "is_configured": true, 00:18:42.172 "data_offset": 256, 00:18:42.172 "data_size": 7936 00:18:42.172 } 00:18:42.172 ] 00:18:42.172 }' 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.172 15:45:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.740 15:45:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:42.740 15:45:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.740 15:45:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.740 [2024-11-25 15:45:41.181433] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:42.740 [2024-11-25 15:45:41.196429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:42.740 15:45:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.740 15:45:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:42.740 [2024-11-25 15:45:41.198258] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.679 "name": "raid_bdev1", 00:18:43.679 
"uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:43.679 "strip_size_kb": 0, 00:18:43.679 "state": "online", 00:18:43.679 "raid_level": "raid1", 00:18:43.679 "superblock": true, 00:18:43.679 "num_base_bdevs": 2, 00:18:43.679 "num_base_bdevs_discovered": 2, 00:18:43.679 "num_base_bdevs_operational": 2, 00:18:43.679 "process": { 00:18:43.679 "type": "rebuild", 00:18:43.679 "target": "spare", 00:18:43.679 "progress": { 00:18:43.679 "blocks": 2560, 00:18:43.679 "percent": 32 00:18:43.679 } 00:18:43.679 }, 00:18:43.679 "base_bdevs_list": [ 00:18:43.679 { 00:18:43.679 "name": "spare", 00:18:43.679 "uuid": "03d9c7d5-12a8-5cfa-a53e-abc00a2962c0", 00:18:43.679 "is_configured": true, 00:18:43.679 "data_offset": 256, 00:18:43.679 "data_size": 7936 00:18:43.679 }, 00:18:43.679 { 00:18:43.679 "name": "BaseBdev2", 00:18:43.679 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:43.679 "is_configured": true, 00:18:43.679 "data_offset": 256, 00:18:43.679 "data_size": 7936 00:18:43.679 } 00:18:43.679 ] 00:18:43.679 }' 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.679 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.679 [2024-11-25 15:45:42.345938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:43.939 [2024-11-25 15:45:42.402845] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:43.939 [2024-11-25 15:45:42.402898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.939 [2024-11-25 15:45:42.402912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.939 [2024-11-25 15:45:42.402923] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.939 "name": "raid_bdev1", 00:18:43.939 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:43.939 "strip_size_kb": 0, 00:18:43.939 "state": "online", 00:18:43.939 "raid_level": "raid1", 00:18:43.939 "superblock": true, 00:18:43.939 "num_base_bdevs": 2, 00:18:43.939 "num_base_bdevs_discovered": 1, 00:18:43.939 "num_base_bdevs_operational": 1, 00:18:43.939 "base_bdevs_list": [ 00:18:43.939 { 00:18:43.939 "name": null, 00:18:43.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.939 "is_configured": false, 00:18:43.939 "data_offset": 0, 00:18:43.939 "data_size": 7936 00:18:43.939 }, 00:18:43.939 { 00:18:43.939 "name": "BaseBdev2", 00:18:43.939 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:43.939 "is_configured": true, 00:18:43.939 "data_offset": 256, 00:18:43.939 "data_size": 7936 00:18:43.939 } 00:18:43.939 ] 00:18:43.939 }' 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.939 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.509 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.509 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:44.509 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:44.509 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:44.509 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.509 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.509 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.509 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.509 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.509 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.509 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.509 "name": "raid_bdev1", 00:18:44.509 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:44.509 "strip_size_kb": 0, 00:18:44.509 "state": "online", 00:18:44.509 "raid_level": "raid1", 00:18:44.509 "superblock": true, 00:18:44.509 "num_base_bdevs": 2, 00:18:44.509 "num_base_bdevs_discovered": 1, 00:18:44.509 "num_base_bdevs_operational": 1, 00:18:44.509 "base_bdevs_list": [ 00:18:44.509 { 00:18:44.509 "name": null, 00:18:44.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.509 "is_configured": false, 00:18:44.509 "data_offset": 0, 00:18:44.509 "data_size": 7936 00:18:44.509 }, 00:18:44.509 { 00:18:44.509 "name": "BaseBdev2", 00:18:44.509 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:44.509 "is_configured": true, 00:18:44.509 "data_offset": 256, 00:18:44.509 "data_size": 7936 00:18:44.509 } 00:18:44.509 ] 00:18:44.509 }' 
00:18:44.509 15:45:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.509 15:45:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:44.509 15:45:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.510 15:45:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:44.510 15:45:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:44.510 15:45:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.510 15:45:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.510 [2024-11-25 15:45:43.079304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:44.510 [2024-11-25 15:45:43.094564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:44.510 15:45:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.510 15:45:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:44.510 [2024-11-25 15:45:43.096328] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:45.449 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.449 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.449 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.449 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
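The `verify_raid_bdev_process` checks above rely on jq's alternative operator, `.process.type // "none"`, to supply a default when the raid bdev has no background process running; that is how the test tells "idle" apart from "rebuilding" without a conditional. A small sketch with two hypothetical samples (neither is captured from a target):

```shell
#!/usr/bin/env bash
# `.process.type` is null when no process object exists; the `//` operator
# then substitutes the fallback string "none". Both samples are illustrative.
idle='{"name": "raid_bdev1", "state": "online"}'
rebuilding='{"name": "raid_bdev1", "process": {"type": "rebuild", "target": "spare"}}'

a=$(printf '%s' "$idle"       | jq -r '.process.type // "none"')
b=$(printf '%s' "$rebuilding" | jq -r '.process.type // "none"')

echo "$a $b"   # prints: none rebuild
```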
local target=spare 00:18:45.449 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.449 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.449 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.449 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.449 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.450 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.710 "name": "raid_bdev1", 00:18:45.710 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:45.710 "strip_size_kb": 0, 00:18:45.710 "state": "online", 00:18:45.710 "raid_level": "raid1", 00:18:45.710 "superblock": true, 00:18:45.710 "num_base_bdevs": 2, 00:18:45.710 "num_base_bdevs_discovered": 2, 00:18:45.710 "num_base_bdevs_operational": 2, 00:18:45.710 "process": { 00:18:45.710 "type": "rebuild", 00:18:45.710 "target": "spare", 00:18:45.710 "progress": { 00:18:45.710 "blocks": 2560, 00:18:45.710 "percent": 32 00:18:45.710 } 00:18:45.710 }, 00:18:45.710 "base_bdevs_list": [ 00:18:45.710 { 00:18:45.710 "name": "spare", 00:18:45.710 "uuid": "03d9c7d5-12a8-5cfa-a53e-abc00a2962c0", 00:18:45.710 "is_configured": true, 00:18:45.710 "data_offset": 256, 00:18:45.710 "data_size": 7936 00:18:45.710 }, 00:18:45.710 { 00:18:45.710 "name": "BaseBdev2", 00:18:45.710 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:45.710 "is_configured": true, 00:18:45.710 "data_offset": 256, 00:18:45.710 "data_size": 7936 00:18:45.710 } 00:18:45.710 ] 00:18:45.710 }' 00:18:45.710 15:45:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:45.710 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=717 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.710 15:45:44 
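The `bdev_raid.sh: line 666: [: =: unary operator expected` error recorded above is the classic single-bracket pitfall: an unset or empty variable expands to nothing, so `'[' $var = false ']'` collapses to `[ = false ]`, which `test` cannot parse. Quoting the expansion (or using `[[ ]]`, which does not word-split) keeps the empty operand in place. A minimal reproduction with a hypothetical variable name:

```shell
#!/usr/bin/env bash
flag=""   # empty, like the unset variable behind the log's line-666 error

# Unquoted, the expansion collapses to `[ = false ]` -- the exact
# "unary operator expected" failure in the log (exit status 2).
unquoted=0
[ $flag = false ] 2>/dev/null || unquoted=$?

# Quoted, an empty operand survives: `[ "" = false ]` is a well-formed
# comparison that simply evaluates false (exit status 1), no error.
quoted=0
[ "$flag" = false ] || quoted=$?

echo "unquoted=$unquoted quoted=$quoted"   # prints: unquoted=2 quoted=1
```

Note the test continues past the error only because the broken `[` expression exits non-zero down the `'[' ... = false ']'` branch; with `[[ $flag = false ]]` the comparison would have been safe without quoting.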
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.710 "name": "raid_bdev1", 00:18:45.710 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:45.710 "strip_size_kb": 0, 00:18:45.710 "state": "online", 00:18:45.710 "raid_level": "raid1", 00:18:45.710 "superblock": true, 00:18:45.710 "num_base_bdevs": 2, 00:18:45.710 "num_base_bdevs_discovered": 2, 00:18:45.710 "num_base_bdevs_operational": 2, 00:18:45.710 "process": { 00:18:45.710 "type": "rebuild", 00:18:45.710 "target": "spare", 00:18:45.710 "progress": { 00:18:45.710 "blocks": 2816, 00:18:45.710 "percent": 35 00:18:45.710 } 00:18:45.710 }, 00:18:45.710 "base_bdevs_list": [ 00:18:45.710 { 00:18:45.710 "name": "spare", 00:18:45.710 "uuid": "03d9c7d5-12a8-5cfa-a53e-abc00a2962c0", 00:18:45.710 "is_configured": true, 00:18:45.710 "data_offset": 256, 00:18:45.710 "data_size": 7936 00:18:45.710 }, 00:18:45.710 { 00:18:45.710 "name": "BaseBdev2", 00:18:45.710 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:45.710 "is_configured": true, 00:18:45.710 "data_offset": 256, 00:18:45.710 "data_size": 7936 00:18:45.710 } 00:18:45.710 ] 00:18:45.710 }' 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.710 15:45:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:47.093 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:47.093 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.093 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.093 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:47.093 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:47.093 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.093 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.093 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.093 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.093 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.093 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.093 15:45:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.093 "name": "raid_bdev1", 00:18:47.093 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:47.093 "strip_size_kb": 0, 00:18:47.093 "state": "online", 00:18:47.093 "raid_level": "raid1", 00:18:47.093 "superblock": true, 00:18:47.093 "num_base_bdevs": 2, 00:18:47.093 "num_base_bdevs_discovered": 2, 00:18:47.093 "num_base_bdevs_operational": 2, 00:18:47.093 "process": { 00:18:47.093 "type": "rebuild", 00:18:47.093 "target": "spare", 00:18:47.093 "progress": { 00:18:47.093 "blocks": 5632, 00:18:47.093 "percent": 70 00:18:47.093 } 00:18:47.093 }, 00:18:47.093 "base_bdevs_list": [ 00:18:47.093 { 00:18:47.093 "name": "spare", 00:18:47.093 "uuid": "03d9c7d5-12a8-5cfa-a53e-abc00a2962c0", 00:18:47.093 "is_configured": true, 00:18:47.093 "data_offset": 256, 00:18:47.093 "data_size": 7936 00:18:47.093 }, 00:18:47.093 { 00:18:47.093 "name": "BaseBdev2", 00:18:47.093 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:47.093 "is_configured": true, 00:18:47.093 "data_offset": 256, 00:18:47.093 "data_size": 7936 00:18:47.093 } 00:18:47.093 ] 00:18:47.094 }' 00:18:47.094 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.094 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.094 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.094 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.094 15:45:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:47.664 [2024-11-25 15:45:46.207699] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:47.664 [2024-11-25 15:45:46.207808] 
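The `local timeout=717` / `(( SECONDS < timeout ))` / `sleep 1` pattern above is a bounded poll loop built on bash's `SECONDS` builtin, which counts seconds since the shell started. A compressed sketch of the same shape, with `rebuild_done` as a hypothetical stand-in for re-querying `.process.type` over RPC (the short timeout and sub-second sleep are only to keep the sketch fast):

```shell
#!/usr/bin/env bash
# Poll until a condition holds or bash's SECONDS counter passes the deadline.
attempts=0
rebuild_done() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # pretend the rebuild finishes on the 3rd poll
}

timeout=$((SECONDS + 5))   # the real test allows a much longer window
while (( SECONDS < timeout )); do
  if rebuild_done; then
    break
  fi
  sleep 0.1                # the real test sleeps 1s between polls
done

echo "attempts=$attempts"
```

Because `SECONDS` is compared against an absolute deadline, the loop stays bounded even if each poll (an RPC round trip in the real test) is slow.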
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:47.664 [2024-11-25 15:45:46.207904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.924 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:47.924 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.924 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.924 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:47.924 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:47.924 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.924 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.924 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.924 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.924 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.924 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.924 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.924 "name": "raid_bdev1", 00:18:47.924 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:47.924 "strip_size_kb": 0, 00:18:47.924 "state": "online", 00:18:47.924 "raid_level": "raid1", 00:18:47.924 "superblock": true, 00:18:47.924 "num_base_bdevs": 2, 00:18:47.924 
"num_base_bdevs_discovered": 2, 00:18:47.924 "num_base_bdevs_operational": 2, 00:18:47.924 "base_bdevs_list": [ 00:18:47.924 { 00:18:47.924 "name": "spare", 00:18:47.924 "uuid": "03d9c7d5-12a8-5cfa-a53e-abc00a2962c0", 00:18:47.924 "is_configured": true, 00:18:47.924 "data_offset": 256, 00:18:47.924 "data_size": 7936 00:18:47.924 }, 00:18:47.924 { 00:18:47.924 "name": "BaseBdev2", 00:18:47.924 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:47.924 "is_configured": true, 00:18:47.924 "data_offset": 256, 00:18:47.924 "data_size": 7936 00:18:47.924 } 00:18:47.924 ] 00:18:47.924 }' 00:18:47.924 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.184 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:48.184 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.184 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:48.184 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:48.184 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:48.184 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.184 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:48.184 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:48.184 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.184 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.184 15:45:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.184 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.184 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.184 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.184 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.184 "name": "raid_bdev1", 00:18:48.184 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:48.184 "strip_size_kb": 0, 00:18:48.184 "state": "online", 00:18:48.184 "raid_level": "raid1", 00:18:48.184 "superblock": true, 00:18:48.184 "num_base_bdevs": 2, 00:18:48.184 "num_base_bdevs_discovered": 2, 00:18:48.184 "num_base_bdevs_operational": 2, 00:18:48.185 "base_bdevs_list": [ 00:18:48.185 { 00:18:48.185 "name": "spare", 00:18:48.185 "uuid": "03d9c7d5-12a8-5cfa-a53e-abc00a2962c0", 00:18:48.185 "is_configured": true, 00:18:48.185 "data_offset": 256, 00:18:48.185 "data_size": 7936 00:18:48.185 }, 00:18:48.185 { 00:18:48.185 "name": "BaseBdev2", 00:18:48.185 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:48.185 "is_configured": true, 00:18:48.185 "data_offset": 256, 00:18:48.185 "data_size": 7936 00:18:48.185 } 00:18:48.185 ] 00:18:48.185 }' 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:48.185 15:45:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.185 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.445 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.445 "name": 
"raid_bdev1", 00:18:48.445 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:48.445 "strip_size_kb": 0, 00:18:48.445 "state": "online", 00:18:48.445 "raid_level": "raid1", 00:18:48.445 "superblock": true, 00:18:48.445 "num_base_bdevs": 2, 00:18:48.445 "num_base_bdevs_discovered": 2, 00:18:48.445 "num_base_bdevs_operational": 2, 00:18:48.445 "base_bdevs_list": [ 00:18:48.445 { 00:18:48.445 "name": "spare", 00:18:48.445 "uuid": "03d9c7d5-12a8-5cfa-a53e-abc00a2962c0", 00:18:48.445 "is_configured": true, 00:18:48.445 "data_offset": 256, 00:18:48.445 "data_size": 7936 00:18:48.445 }, 00:18:48.445 { 00:18:48.445 "name": "BaseBdev2", 00:18:48.445 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:48.445 "is_configured": true, 00:18:48.445 "data_offset": 256, 00:18:48.445 "data_size": 7936 00:18:48.445 } 00:18:48.445 ] 00:18:48.445 }' 00:18:48.445 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.445 15:45:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.706 [2024-11-25 15:45:47.298781] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:48.706 [2024-11-25 15:45:47.298856] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:48.706 [2024-11-25 15:45:47.298966] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:48.706 [2024-11-25 15:45:47.299053] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:48.706 [2024-11-25 
15:45:47.299114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.706 15:45:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.706 [2024-11-25 15:45:47.374643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:48.706 [2024-11-25 15:45:47.374689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.706 [2024-11-25 15:45:47.374709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:48.706 [2024-11-25 15:45:47.374717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.706 [2024-11-25 15:45:47.376624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.706 [2024-11-25 15:45:47.376664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:48.706 [2024-11-25 15:45:47.376717] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:48.706 [2024-11-25 15:45:47.376770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.706 [2024-11-25 15:45:47.376869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:48.706 spare 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.706 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.966 [2024-11-25 15:45:47.476769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:48.966 [2024-11-25 15:45:47.476839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:48.966 [2024-11-25 15:45:47.476927] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:48.966 [2024-11-25 15:45:47.477015] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:48.966 [2024-11-25 15:45:47.477032] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:48.966 [2024-11-25 15:45:47.477108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.966 15:45:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.966 "name": "raid_bdev1", 00:18:48.966 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:48.966 "strip_size_kb": 0, 00:18:48.966 "state": "online", 00:18:48.966 "raid_level": "raid1", 00:18:48.966 "superblock": true, 00:18:48.966 "num_base_bdevs": 2, 00:18:48.966 "num_base_bdevs_discovered": 2, 00:18:48.966 "num_base_bdevs_operational": 2, 00:18:48.966 "base_bdevs_list": [ 00:18:48.966 { 00:18:48.966 "name": "spare", 00:18:48.966 "uuid": "03d9c7d5-12a8-5cfa-a53e-abc00a2962c0", 00:18:48.966 "is_configured": true, 00:18:48.966 "data_offset": 256, 00:18:48.966 "data_size": 7936 00:18:48.966 }, 00:18:48.966 { 00:18:48.966 "name": "BaseBdev2", 00:18:48.966 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:48.966 "is_configured": true, 00:18:48.966 "data_offset": 256, 00:18:48.966 "data_size": 7936 00:18:48.966 } 00:18:48.966 ] 00:18:48.966 }' 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.966 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.537 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:49.537 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.537 15:45:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:49.537 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:49.537 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.537 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.537 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.537 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.537 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.537 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.537 15:45:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.537 "name": "raid_bdev1", 00:18:49.537 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:49.537 "strip_size_kb": 0, 00:18:49.537 "state": "online", 00:18:49.537 "raid_level": "raid1", 00:18:49.537 "superblock": true, 00:18:49.537 "num_base_bdevs": 2, 00:18:49.537 "num_base_bdevs_discovered": 2, 00:18:49.537 "num_base_bdevs_operational": 2, 00:18:49.537 "base_bdevs_list": [ 00:18:49.537 { 00:18:49.537 "name": "spare", 00:18:49.537 "uuid": "03d9c7d5-12a8-5cfa-a53e-abc00a2962c0", 00:18:49.537 "is_configured": true, 00:18:49.537 "data_offset": 256, 00:18:49.537 "data_size": 7936 00:18:49.537 }, 00:18:49.537 { 00:18:49.537 "name": "BaseBdev2", 00:18:49.537 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:49.537 "is_configured": true, 00:18:49.537 "data_offset": 256, 00:18:49.537 "data_size": 7936 00:18:49.537 } 00:18:49.537 ] 00:18:49.537 }' 00:18:49.537 15:45:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.537 [2024-11-25 15:45:48.125449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.537 15:45:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.537 "name": "raid_bdev1", 00:18:49.537 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:49.537 "strip_size_kb": 0, 00:18:49.537 "state": "online", 00:18:49.537 
"raid_level": "raid1", 00:18:49.537 "superblock": true, 00:18:49.537 "num_base_bdevs": 2, 00:18:49.537 "num_base_bdevs_discovered": 1, 00:18:49.537 "num_base_bdevs_operational": 1, 00:18:49.537 "base_bdevs_list": [ 00:18:49.537 { 00:18:49.537 "name": null, 00:18:49.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.537 "is_configured": false, 00:18:49.537 "data_offset": 0, 00:18:49.537 "data_size": 7936 00:18:49.537 }, 00:18:49.537 { 00:18:49.537 "name": "BaseBdev2", 00:18:49.537 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:49.537 "is_configured": true, 00:18:49.537 "data_offset": 256, 00:18:49.537 "data_size": 7936 00:18:49.537 } 00:18:49.537 ] 00:18:49.537 }' 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.537 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.108 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:50.108 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.108 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.108 [2024-11-25 15:45:48.588679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.108 [2024-11-25 15:45:48.588821] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:50.108 [2024-11-25 15:45:48.588836] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:50.108 [2024-11-25 15:45:48.588869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.108 [2024-11-25 15:45:48.604106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:50.108 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.108 15:45:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:50.108 [2024-11-25 15:45:48.605836] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.068 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.068 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.068 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.068 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.068 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.068 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.068 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.068 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.068 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.068 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.068 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:51.068 "name": "raid_bdev1", 00:18:51.068 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:51.068 "strip_size_kb": 0, 00:18:51.068 "state": "online", 00:18:51.068 "raid_level": "raid1", 00:18:51.068 "superblock": true, 00:18:51.068 "num_base_bdevs": 2, 00:18:51.068 "num_base_bdevs_discovered": 2, 00:18:51.068 "num_base_bdevs_operational": 2, 00:18:51.068 "process": { 00:18:51.068 "type": "rebuild", 00:18:51.068 "target": "spare", 00:18:51.068 "progress": { 00:18:51.068 "blocks": 2560, 00:18:51.068 "percent": 32 00:18:51.068 } 00:18:51.068 }, 00:18:51.068 "base_bdevs_list": [ 00:18:51.068 { 00:18:51.068 "name": "spare", 00:18:51.068 "uuid": "03d9c7d5-12a8-5cfa-a53e-abc00a2962c0", 00:18:51.068 "is_configured": true, 00:18:51.068 "data_offset": 256, 00:18:51.068 "data_size": 7936 00:18:51.068 }, 00:18:51.068 { 00:18:51.068 "name": "BaseBdev2", 00:18:51.068 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:51.068 "is_configured": true, 00:18:51.068 "data_offset": 256, 00:18:51.068 "data_size": 7936 00:18:51.068 } 00:18:51.068 ] 00:18:51.068 }' 00:18:51.068 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.068 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.068 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.328 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.329 [2024-11-25 15:45:49.765408] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.329 [2024-11-25 15:45:49.810328] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:51.329 [2024-11-25 15:45:49.810424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.329 [2024-11-25 15:45:49.810438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.329 [2024-11-25 15:45:49.810447] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.329 15:45:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.329 "name": "raid_bdev1", 00:18:51.329 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:51.329 "strip_size_kb": 0, 00:18:51.329 "state": "online", 00:18:51.329 "raid_level": "raid1", 00:18:51.329 "superblock": true, 00:18:51.329 "num_base_bdevs": 2, 00:18:51.329 "num_base_bdevs_discovered": 1, 00:18:51.329 "num_base_bdevs_operational": 1, 00:18:51.329 "base_bdevs_list": [ 00:18:51.329 { 00:18:51.329 "name": null, 00:18:51.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.329 "is_configured": false, 00:18:51.329 "data_offset": 0, 00:18:51.329 "data_size": 7936 00:18:51.329 }, 00:18:51.329 { 00:18:51.329 "name": "BaseBdev2", 00:18:51.329 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:51.329 "is_configured": true, 00:18:51.329 "data_offset": 256, 00:18:51.329 "data_size": 7936 00:18:51.329 } 00:18:51.329 ] 00:18:51.329 }' 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.329 15:45:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.899 15:45:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:51.899 15:45:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.899 15:45:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.899 [2024-11-25 15:45:50.285549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:51.899 [2024-11-25 15:45:50.285661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.899 [2024-11-25 15:45:50.285699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:51.899 [2024-11-25 15:45:50.285729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.899 [2024-11-25 15:45:50.285916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.899 [2024-11-25 15:45:50.285966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:51.899 [2024-11-25 15:45:50.286046] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:51.899 [2024-11-25 15:45:50.286085] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:51.899 [2024-11-25 15:45:50.286122] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:51.899 [2024-11-25 15:45:50.286167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:51.899 [2024-11-25 15:45:50.301442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:51.899 spare 00:18:51.899 15:45:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.899 [2024-11-25 15:45:50.303236] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.899 15:45:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:52.840 "name": "raid_bdev1", 00:18:52.840 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:52.840 "strip_size_kb": 0, 00:18:52.840 "state": "online", 00:18:52.840 "raid_level": "raid1", 00:18:52.840 "superblock": true, 00:18:52.840 "num_base_bdevs": 2, 00:18:52.840 "num_base_bdevs_discovered": 2, 00:18:52.840 "num_base_bdevs_operational": 2, 00:18:52.840 "process": { 00:18:52.840 "type": "rebuild", 00:18:52.840 "target": "spare", 00:18:52.840 "progress": { 00:18:52.840 "blocks": 2560, 00:18:52.840 "percent": 32 00:18:52.840 } 00:18:52.840 }, 00:18:52.840 "base_bdevs_list": [ 00:18:52.840 { 00:18:52.840 "name": "spare", 00:18:52.840 "uuid": "03d9c7d5-12a8-5cfa-a53e-abc00a2962c0", 00:18:52.840 "is_configured": true, 00:18:52.840 "data_offset": 256, 00:18:52.840 "data_size": 7936 00:18:52.840 }, 00:18:52.840 { 00:18:52.840 "name": "BaseBdev2", 00:18:52.840 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:52.840 "is_configured": true, 00:18:52.840 "data_offset": 256, 00:18:52.840 "data_size": 7936 00:18:52.840 } 00:18:52.840 ] 00:18:52.840 }' 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.840 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.840 [2024-11-25 
15:45:51.463283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:52.840 [2024-11-25 15:45:51.507656] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:52.840 [2024-11-25 15:45:51.507707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.840 [2024-11-25 15:45:51.507722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:52.840 [2024-11-25 15:45:51.507728] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.101 15:45:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.101 "name": "raid_bdev1", 00:18:53.101 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:53.101 "strip_size_kb": 0, 00:18:53.101 "state": "online", 00:18:53.101 "raid_level": "raid1", 00:18:53.101 "superblock": true, 00:18:53.101 "num_base_bdevs": 2, 00:18:53.101 "num_base_bdevs_discovered": 1, 00:18:53.101 "num_base_bdevs_operational": 1, 00:18:53.101 "base_bdevs_list": [ 00:18:53.101 { 00:18:53.101 "name": null, 00:18:53.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.101 "is_configured": false, 00:18:53.101 "data_offset": 0, 00:18:53.101 "data_size": 7936 00:18:53.101 }, 00:18:53.101 { 00:18:53.101 "name": "BaseBdev2", 00:18:53.101 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:53.101 "is_configured": true, 00:18:53.101 "data_offset": 256, 00:18:53.101 "data_size": 7936 00:18:53.101 } 00:18:53.101 ] 00:18:53.101 }' 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.101 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.361 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:53.361 15:45:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.361 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:53.361 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:53.361 15:45:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.361 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.361 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.361 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.361 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.361 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.361 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.361 "name": "raid_bdev1", 00:18:53.361 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:53.361 "strip_size_kb": 0, 00:18:53.361 "state": "online", 00:18:53.361 "raid_level": "raid1", 00:18:53.361 "superblock": true, 00:18:53.361 "num_base_bdevs": 2, 00:18:53.361 "num_base_bdevs_discovered": 1, 00:18:53.361 "num_base_bdevs_operational": 1, 00:18:53.361 "base_bdevs_list": [ 00:18:53.361 { 00:18:53.361 "name": null, 00:18:53.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.361 "is_configured": false, 00:18:53.361 "data_offset": 0, 00:18:53.361 "data_size": 7936 00:18:53.361 }, 00:18:53.361 { 00:18:53.361 "name": "BaseBdev2", 00:18:53.361 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:53.361 "is_configured": true, 00:18:53.361 "data_offset": 256, 
00:18:53.361 "data_size": 7936 00:18:53.361 } 00:18:53.361 ] 00:18:53.361 }' 00:18:53.622 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.622 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:53.622 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.622 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:53.622 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:53.622 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.622 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.622 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.622 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:53.622 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.622 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.622 [2024-11-25 15:45:52.143541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:53.622 [2024-11-25 15:45:52.143656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.622 [2024-11-25 15:45:52.143682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:53.622 [2024-11-25 15:45:52.143692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.622 [2024-11-25 15:45:52.143836] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.622 [2024-11-25 15:45:52.143849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:53.622 [2024-11-25 15:45:52.143897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:53.622 [2024-11-25 15:45:52.143910] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:53.622 [2024-11-25 15:45:52.143920] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:53.622 [2024-11-25 15:45:52.143929] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:53.622 BaseBdev1 00:18:53.622 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.622 15:45:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.594 15:45:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.594 "name": "raid_bdev1", 00:18:54.594 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:54.594 "strip_size_kb": 0, 00:18:54.594 "state": "online", 00:18:54.594 "raid_level": "raid1", 00:18:54.594 "superblock": true, 00:18:54.594 "num_base_bdevs": 2, 00:18:54.594 "num_base_bdevs_discovered": 1, 00:18:54.594 "num_base_bdevs_operational": 1, 00:18:54.594 "base_bdevs_list": [ 00:18:54.594 { 00:18:54.594 "name": null, 00:18:54.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.594 "is_configured": false, 00:18:54.594 "data_offset": 0, 00:18:54.594 "data_size": 7936 00:18:54.594 }, 00:18:54.594 { 00:18:54.594 "name": "BaseBdev2", 00:18:54.594 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:54.594 "is_configured": true, 00:18:54.594 "data_offset": 256, 00:18:54.594 "data_size": 7936 00:18:54.594 } 00:18:54.594 ] 00:18:54.594 }' 00:18:54.594 15:45:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.594 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.164 "name": "raid_bdev1", 00:18:55.164 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:55.164 "strip_size_kb": 0, 00:18:55.164 "state": "online", 00:18:55.164 "raid_level": "raid1", 00:18:55.164 "superblock": true, 00:18:55.164 "num_base_bdevs": 2, 00:18:55.164 "num_base_bdevs_discovered": 1, 00:18:55.164 "num_base_bdevs_operational": 1, 00:18:55.164 "base_bdevs_list": [ 00:18:55.164 { 00:18:55.164 "name": 
null, 00:18:55.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.164 "is_configured": false, 00:18:55.164 "data_offset": 0, 00:18:55.164 "data_size": 7936 00:18:55.164 }, 00:18:55.164 { 00:18:55.164 "name": "BaseBdev2", 00:18:55.164 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:55.164 "is_configured": true, 00:18:55.164 "data_offset": 256, 00:18:55.164 "data_size": 7936 00:18:55.164 } 00:18:55.164 ] 00:18:55.164 }' 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.164 [2024-11-25 15:45:53.748823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.164 [2024-11-25 15:45:53.749033] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:55.164 [2024-11-25 15:45:53.749092] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:55.164 request: 00:18:55.164 { 00:18:55.164 "base_bdev": "BaseBdev1", 00:18:55.164 "raid_bdev": "raid_bdev1", 00:18:55.164 "method": "bdev_raid_add_base_bdev", 00:18:55.164 "req_id": 1 00:18:55.164 } 00:18:55.164 Got JSON-RPC error response 00:18:55.164 response: 00:18:55.164 { 00:18:55.164 "code": -22, 00:18:55.164 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:55.164 } 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.164 15:45:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:56.105 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:56.105 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.105 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.105 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.105 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.105 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:56.105 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.105 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.105 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.105 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.105 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.105 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.105 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.105 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.365 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.365 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.365 "name": "raid_bdev1", 00:18:56.365 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:56.365 "strip_size_kb": 0, 
00:18:56.365 "state": "online", 00:18:56.365 "raid_level": "raid1", 00:18:56.365 "superblock": true, 00:18:56.365 "num_base_bdevs": 2, 00:18:56.365 "num_base_bdevs_discovered": 1, 00:18:56.365 "num_base_bdevs_operational": 1, 00:18:56.365 "base_bdevs_list": [ 00:18:56.365 { 00:18:56.365 "name": null, 00:18:56.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.365 "is_configured": false, 00:18:56.365 "data_offset": 0, 00:18:56.365 "data_size": 7936 00:18:56.365 }, 00:18:56.365 { 00:18:56.365 "name": "BaseBdev2", 00:18:56.365 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:56.365 "is_configured": true, 00:18:56.365 "data_offset": 256, 00:18:56.365 "data_size": 7936 00:18:56.365 } 00:18:56.365 ] 00:18:56.365 }' 00:18:56.365 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.365 15:45:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.625 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:56.625 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.625 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:56.625 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:56.625 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.625 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.625 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.625 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.625 
15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.625 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.885 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.885 "name": "raid_bdev1", 00:18:56.885 "uuid": "984e2e34-af0c-4c59-9cd8-ae03f7f150d2", 00:18:56.885 "strip_size_kb": 0, 00:18:56.885 "state": "online", 00:18:56.885 "raid_level": "raid1", 00:18:56.885 "superblock": true, 00:18:56.885 "num_base_bdevs": 2, 00:18:56.885 "num_base_bdevs_discovered": 1, 00:18:56.885 "num_base_bdevs_operational": 1, 00:18:56.885 "base_bdevs_list": [ 00:18:56.885 { 00:18:56.885 "name": null, 00:18:56.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.885 "is_configured": false, 00:18:56.885 "data_offset": 0, 00:18:56.885 "data_size": 7936 00:18:56.885 }, 00:18:56.885 { 00:18:56.885 "name": "BaseBdev2", 00:18:56.885 "uuid": "cc2ae8a8-0c7e-5abd-b677-41c145f218cc", 00:18:56.885 "is_configured": true, 00:18:56.885 "data_offset": 256, 00:18:56.885 "data_size": 7936 00:18:56.885 } 00:18:56.885 ] 00:18:56.885 }' 00:18:56.885 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.885 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:56.885 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.885 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:56.885 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88632 00:18:56.885 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88632 ']' 00:18:56.885 15:45:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88632 00:18:56.885 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:56.885 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.885 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88632 00:18:56.885 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.885 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.885 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88632' 00:18:56.885 killing process with pid 88632 00:18:56.885 Received shutdown signal, test time was about 60.000000 seconds 00:18:56.885 00:18:56.885 Latency(us) 00:18:56.885 [2024-11-25T15:45:55.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.885 [2024-11-25T15:45:55.566Z] =================================================================================================================== 00:18:56.885 [2024-11-25T15:45:55.566Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:56.885 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88632 00:18:56.885 [2024-11-25 15:45:55.455564] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.885 [2024-11-25 15:45:55.455670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.885 [2024-11-25 15:45:55.455710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:56.885 [2024-11-25 15:45:55.455721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:56.885 15:45:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88632 00:18:57.146 [2024-11-25 15:45:55.736991] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.094 15:45:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:58.094 00:18:58.094 real 0m17.723s 00:18:58.094 user 0m23.427s 00:18:58.094 sys 0m1.742s 00:18:58.094 15:45:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.094 ************************************ 00:18:58.094 END TEST raid_rebuild_test_sb_md_interleaved 00:18:58.094 ************************************ 00:18:58.094 15:45:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.380 15:45:56 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:58.380 15:45:56 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:58.380 15:45:56 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88632 ']' 00:18:58.380 15:45:56 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88632 00:18:58.380 15:45:56 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:58.380 00:18:58.380 real 11m38.907s 00:18:58.380 user 15m49.436s 00:18:58.380 sys 1m47.368s 00:18:58.380 15:45:56 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.380 15:45:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.380 ************************************ 00:18:58.380 END TEST bdev_raid 00:18:58.380 ************************************ 00:18:58.380 15:45:56 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:58.380 15:45:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:58.380 15:45:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.380 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:18:58.380 
************************************ 00:18:58.380 START TEST spdkcli_raid 00:18:58.380 ************************************ 00:18:58.380 15:45:56 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:58.380 * Looking for test storage... 00:18:58.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:58.380 15:45:57 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:58.380 15:45:57 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:18:58.381 15:45:57 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:58.656 15:45:57 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.656 15:45:57 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:58.656 15:45:57 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.656 15:45:57 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:58.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.656 --rc genhtml_branch_coverage=1 00:18:58.656 --rc genhtml_function_coverage=1 00:18:58.656 --rc genhtml_legend=1 00:18:58.656 --rc geninfo_all_blocks=1 00:18:58.656 --rc geninfo_unexecuted_blocks=1 00:18:58.656 00:18:58.656 ' 00:18:58.656 15:45:57 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:58.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.656 --rc genhtml_branch_coverage=1 00:18:58.656 --rc genhtml_function_coverage=1 00:18:58.656 --rc genhtml_legend=1 00:18:58.656 --rc geninfo_all_blocks=1 00:18:58.656 --rc geninfo_unexecuted_blocks=1 00:18:58.656 00:18:58.656 ' 00:18:58.656 
15:45:57 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:58.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.656 --rc genhtml_branch_coverage=1 00:18:58.656 --rc genhtml_function_coverage=1 00:18:58.656 --rc genhtml_legend=1 00:18:58.656 --rc geninfo_all_blocks=1 00:18:58.656 --rc geninfo_unexecuted_blocks=1 00:18:58.656 00:18:58.656 ' 00:18:58.656 15:45:57 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:58.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.656 --rc genhtml_branch_coverage=1 00:18:58.656 --rc genhtml_function_coverage=1 00:18:58.656 --rc genhtml_legend=1 00:18:58.656 --rc geninfo_all_blocks=1 00:18:58.656 --rc geninfo_unexecuted_blocks=1 00:18:58.656 00:18:58.656 ' 00:18:58.656 15:45:57 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:58.656 15:45:57 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:58.656 15:45:57 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:58.656 15:45:57 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:58.656 15:45:57 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:58.656 15:45:57 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:58.656 15:45:57 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:58.656 15:45:57 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:58.656 15:45:57 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:58.656 15:45:57 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:58.656 15:45:57 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:58.656 15:45:57 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:58.656 15:45:57 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:58.657 15:45:57 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:58.657 15:45:57 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:58.657 15:45:57 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.657 15:45:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.657 15:45:57 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:58.657 15:45:57 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89315 00:18:58.657 15:45:57 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:58.657 15:45:57 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89315 00:18:58.657 15:45:57 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89315 ']' 00:18:58.657 15:45:57 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.657 15:45:57 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.657 15:45:57 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.657 15:45:57 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.657 15:45:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.657 [2024-11-25 15:45:57.291643] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:18:58.657 [2024-11-25 15:45:57.291845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89315 ] 00:18:58.916 [2024-11-25 15:45:57.469469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:58.917 [2024-11-25 15:45:57.578927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.917 [2024-11-25 15:45:57.578966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.857 15:45:58 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.857 15:45:58 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:59.857 15:45:58 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:59.857 15:45:58 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:59.857 15:45:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:59.857 15:45:58 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:59.857 15:45:58 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:59.857 15:45:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:59.857 15:45:58 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:59.857 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:59.857 ' 00:19:01.764 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:01.764 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:01.764 15:46:00 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:01.764 15:46:00 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.764 15:46:00 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:01.764 15:46:00 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:01.765 15:46:00 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:01.765 15:46:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.765 15:46:00 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:01.765 ' 00:19:02.703 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:02.703 15:46:01 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:02.703 15:46:01 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.703 15:46:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:02.703 15:46:01 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:02.703 15:46:01 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.703 15:46:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:02.703 15:46:01 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:02.703 15:46:01 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:03.274 15:46:01 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:03.274 15:46:01 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:03.274 15:46:01 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:03.274 15:46:01 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:03.274 15:46:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:03.274 15:46:01 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:03.274 15:46:01 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.274 15:46:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:03.274 15:46:01 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:03.274 ' 00:19:04.211 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:04.471 15:46:03 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:04.471 15:46:03 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:04.471 15:46:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.471 15:46:03 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:04.471 15:46:03 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:04.471 15:46:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.471 15:46:03 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:04.471 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:04.471 ' 00:19:05.853 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:05.853 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:06.112 15:46:04 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:06.112 15:46:04 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:06.112 15:46:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.112 15:46:04 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89315 00:19:06.112 15:46:04 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89315 ']' 00:19:06.112 15:46:04 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89315 00:19:06.112 15:46:04 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:06.112 15:46:04 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.112 15:46:04 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89315 00:19:06.112 15:46:04 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:06.112 15:46:04 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:06.112 15:46:04 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89315' 00:19:06.112 killing process with pid 89315 00:19:06.112 15:46:04 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89315 00:19:06.112 15:46:04 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89315 00:19:08.654 15:46:07 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:08.654 15:46:07 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89315 ']' 00:19:08.654 15:46:07 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89315 00:19:08.654 15:46:07 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89315 ']' 00:19:08.654 15:46:07 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89315 00:19:08.654 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89315) - No such process 00:19:08.654 15:46:07 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89315 is not found' 00:19:08.654 Process with pid 89315 is not found 00:19:08.654 15:46:07 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:08.654 15:46:07 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:08.654 15:46:07 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:08.654 15:46:07 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:08.654 00:19:08.654 real 0m10.185s 00:19:08.654 user 0m20.861s 00:19:08.654 sys 
0m1.180s 00:19:08.654 15:46:07 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.654 15:46:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:08.654 ************************************ 00:19:08.654 END TEST spdkcli_raid 00:19:08.654 ************************************ 00:19:08.654 15:46:07 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:08.654 15:46:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:08.654 15:46:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.654 15:46:07 -- common/autotest_common.sh@10 -- # set +x 00:19:08.654 ************************************ 00:19:08.654 START TEST blockdev_raid5f 00:19:08.654 ************************************ 00:19:08.654 15:46:07 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:08.654 * Looking for test storage... 00:19:08.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:08.654 15:46:07 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:08.654 15:46:07 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:19:08.654 15:46:07 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:08.929 15:46:07 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:08.929 15:46:07 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:08.929 15:46:07 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:08.929 15:46:07 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:08.929 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.930 --rc genhtml_branch_coverage=1 00:19:08.930 --rc genhtml_function_coverage=1 00:19:08.930 --rc genhtml_legend=1 00:19:08.930 --rc geninfo_all_blocks=1 00:19:08.930 --rc geninfo_unexecuted_blocks=1 00:19:08.930 00:19:08.930 ' 00:19:08.930 15:46:07 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:08.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.930 --rc genhtml_branch_coverage=1 00:19:08.930 --rc genhtml_function_coverage=1 00:19:08.930 --rc genhtml_legend=1 00:19:08.930 --rc geninfo_all_blocks=1 00:19:08.930 --rc geninfo_unexecuted_blocks=1 00:19:08.930 00:19:08.930 ' 00:19:08.930 15:46:07 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:08.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.930 --rc genhtml_branch_coverage=1 00:19:08.930 --rc genhtml_function_coverage=1 00:19:08.930 --rc genhtml_legend=1 00:19:08.930 --rc geninfo_all_blocks=1 00:19:08.930 --rc geninfo_unexecuted_blocks=1 00:19:08.930 00:19:08.930 ' 00:19:08.930 15:46:07 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:08.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.930 --rc genhtml_branch_coverage=1 00:19:08.930 --rc genhtml_function_coverage=1 00:19:08.930 --rc genhtml_legend=1 00:19:08.930 --rc geninfo_all_blocks=1 00:19:08.930 --rc geninfo_unexecuted_blocks=1 00:19:08.930 00:19:08.930 ' 00:19:08.930 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:08.930 15:46:07 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:08.930 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:08.930 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:08.930 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:08.930 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:08.930 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:08.930 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:08.930 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:08.930 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89595 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:08.931 15:46:07 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89595 00:19:08.931 15:46:07 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89595 ']' 00:19:08.931 15:46:07 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.931 15:46:07 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.931 15:46:07 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.931 15:46:07 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.931 15:46:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:08.931 [2024-11-25 15:46:07.513519] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:19:08.931 [2024-11-25 15:46:07.513726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89595 ] 00:19:09.193 [2024-11-25 15:46:07.684770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.193 [2024-11-25 15:46:07.817582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.133 15:46:08 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.133 15:46:08 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:10.133 15:46:08 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:10.133 15:46:08 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:10.133 15:46:08 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:10.133 15:46:08 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.133 15:46:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:10.393 Malloc0 00:19:10.393 Malloc1 00:19:10.393 Malloc2 00:19:10.393 15:46:08 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.393 15:46:08 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:10.393 15:46:08 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.393 15:46:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:10.393 15:46:08 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.393 15:46:08 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:10.393 15:46:08 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:10.393 15:46:08 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.393 15:46:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:10.393 15:46:08 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.393 15:46:08 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:10.393 15:46:08 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.393 15:46:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:10.393 15:46:08 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.393 15:46:08 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:10.393 15:46:08 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.393 15:46:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:10.393 15:46:09 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.393 15:46:09 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:10.393 15:46:09 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:19:10.393 15:46:09 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:10.393 15:46:09 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.393 15:46:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:10.393 15:46:09 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.393 15:46:09 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:10.393 15:46:09 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "be9cdeb2-d7f5-485d-9c12-b067be15e7c0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "be9cdeb2-d7f5-485d-9c12-b067be15e7c0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "be9cdeb2-d7f5-485d-9c12-b067be15e7c0",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "e084bbed-6f14-4528-a2f1-8f97431c460f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "b075bd03-cb84-4119-a695-038c8c7539b3",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "98f13446-24f9-42dc-93ae-e534d3fbeb08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:10.393 15:46:09 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:10.653 15:46:09 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:10.653 15:46:09 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:10.653 15:46:09 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:10.653 15:46:09 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89595 00:19:10.653 15:46:09 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89595 ']' 00:19:10.653 15:46:09 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89595 00:19:10.653 15:46:09 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:10.653 15:46:09 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.653 15:46:09 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89595 00:19:10.653 killing process with pid 89595 00:19:10.653 15:46:09 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:10.653 15:46:09 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:10.653 15:46:09 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89595' 00:19:10.653 15:46:09 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89595 00:19:10.653 15:46:09 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89595 00:19:13.947 15:46:11 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:13.947 15:46:11 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:13.947 15:46:11 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:13.947 15:46:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.947 15:46:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:13.947 ************************************ 00:19:13.947 START TEST bdev_hello_world 00:19:13.947 ************************************ 00:19:13.947 15:46:11 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:13.947 [2024-11-25 15:46:12.022644] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:19:13.947 [2024-11-25 15:46:12.022814] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89663 ] 00:19:13.947 [2024-11-25 15:46:12.194708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.947 [2024-11-25 15:46:12.324663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.518 [2024-11-25 15:46:12.927041] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:14.518 [2024-11-25 15:46:12.927094] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:14.518 [2024-11-25 15:46:12.927125] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:14.518 [2024-11-25 15:46:12.927638] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:14.518 [2024-11-25 15:46:12.927792] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:14.518 [2024-11-25 15:46:12.927806] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:14.518 [2024-11-25 15:46:12.927853] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:14.518 00:19:14.518 [2024-11-25 15:46:12.927871] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:15.902 ************************************ 00:19:15.902 END TEST bdev_hello_world 00:19:15.902 ************************************ 00:19:15.902 00:19:15.902 real 0m2.415s 00:19:15.902 user 0m1.952s 00:19:15.902 sys 0m0.338s 00:19:15.902 15:46:14 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.902 15:46:14 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:15.902 15:46:14 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:15.902 15:46:14 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:15.902 15:46:14 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.902 15:46:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:15.902 ************************************ 00:19:15.902 START TEST bdev_bounds 00:19:15.902 ************************************ 00:19:15.902 15:46:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:15.902 15:46:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89711 00:19:15.902 15:46:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:15.902 15:46:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:15.902 15:46:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89711' 00:19:15.902 Process bdevio pid: 89711 00:19:15.902 15:46:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89711 00:19:15.902 15:46:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89711 ']' 00:19:15.902 15:46:14 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.902 15:46:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.902 15:46:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.902 15:46:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.902 15:46:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:15.902 [2024-11-25 15:46:14.519354] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:19:15.902 [2024-11-25 15:46:14.519474] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89711 ] 00:19:16.162 [2024-11-25 15:46:14.700777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:16.162 [2024-11-25 15:46:14.834837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.162 [2024-11-25 15:46:14.835091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.162 [2024-11-25 15:46:14.835099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.101 15:46:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.101 15:46:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:17.101 15:46:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:17.101 I/O targets: 00:19:17.101 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:17.101 00:19:17.101 
00:19:17.101 CUnit - A unit testing framework for C - Version 2.1-3 00:19:17.101 http://cunit.sourceforge.net/ 00:19:17.101 00:19:17.101 00:19:17.101 Suite: bdevio tests on: raid5f 00:19:17.101 Test: blockdev write read block ...passed 00:19:17.101 Test: blockdev write zeroes read block ...passed 00:19:17.101 Test: blockdev write zeroes read no split ...passed 00:19:17.101 Test: blockdev write zeroes read split ...passed 00:19:17.361 Test: blockdev write zeroes read split partial ...passed 00:19:17.361 Test: blockdev reset ...passed 00:19:17.361 Test: blockdev write read 8 blocks ...passed 00:19:17.361 Test: blockdev write read size > 128k ...passed 00:19:17.361 Test: blockdev write read invalid size ...passed 00:19:17.361 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:17.361 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:17.361 Test: blockdev write read max offset ...passed 00:19:17.361 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:17.361 Test: blockdev writev readv 8 blocks ...passed 00:19:17.361 Test: blockdev writev readv 30 x 1block ...passed 00:19:17.361 Test: blockdev writev readv block ...passed 00:19:17.361 Test: blockdev writev readv size > 128k ...passed 00:19:17.361 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:17.361 Test: blockdev comparev and writev ...passed 00:19:17.361 Test: blockdev nvme passthru rw ...passed 00:19:17.361 Test: blockdev nvme passthru vendor specific ...passed 00:19:17.361 Test: blockdev nvme admin passthru ...passed 00:19:17.361 Test: blockdev copy ...passed 00:19:17.361 00:19:17.361 Run Summary: Type Total Ran Passed Failed Inactive 00:19:17.361 suites 1 1 n/a 0 0 00:19:17.361 tests 23 23 23 0 0 00:19:17.361 asserts 130 130 130 0 n/a 00:19:17.361 00:19:17.361 Elapsed time = 0.617 seconds 00:19:17.361 0 00:19:17.361 15:46:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89711 00:19:17.361 
15:46:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89711 ']' 00:19:17.361 15:46:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89711 00:19:17.361 15:46:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:17.361 15:46:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.361 15:46:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89711 00:19:17.361 15:46:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.361 15:46:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.361 15:46:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89711' 00:19:17.361 killing process with pid 89711 00:19:17.361 15:46:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89711 00:19:17.361 15:46:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89711 00:19:18.743 15:46:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:18.743 00:19:18.743 real 0m2.918s 00:19:18.743 user 0m7.163s 00:19:18.743 sys 0m0.469s 00:19:18.743 15:46:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.743 15:46:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:18.743 ************************************ 00:19:18.743 END TEST bdev_bounds 00:19:18.743 ************************************ 00:19:18.743 15:46:17 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:18.743 15:46:17 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:18.743 15:46:17 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.743 
15:46:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:18.743 ************************************ 00:19:18.743 START TEST bdev_nbd 00:19:18.743 ************************************ 00:19:18.743 15:46:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:18.743 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89770 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:19.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89770 /var/tmp/spdk-nbd.sock 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 89770 ']' 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.003 15:46:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:19.003 [2024-11-25 15:46:17.514421] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:19:19.003 [2024-11-25 15:46:17.514585] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.263 [2024-11-25 15:46:17.690931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.263 [2024-11-25 15:46:17.823286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.833 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.833 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:19.833 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:19.833 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:19.833 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:19.833 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:19.833 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:19.833 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:19.833 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:19.833 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:19.833 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:19.833 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:19.833 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:19.833 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:19.833 15:46:18 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:20.093 1+0 records in 00:19:20.093 1+0 records out 00:19:20.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417239 s, 9.8 MB/s 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:20.093 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:20.353 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:20.353 { 00:19:20.353 "nbd_device": "/dev/nbd0", 00:19:20.353 "bdev_name": "raid5f" 00:19:20.353 } 00:19:20.353 ]' 00:19:20.353 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:20.353 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:20.353 { 00:19:20.353 "nbd_device": "/dev/nbd0", 00:19:20.353 "bdev_name": "raid5f" 00:19:20.353 } 00:19:20.353 ]' 00:19:20.353 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:20.353 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:20.353 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:20.353 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:20.353 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:20.353 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:20.353 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:20.353 15:46:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:20.613 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:20.613 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:20.613 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:20.613 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:20.613 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:20.613 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:20.613 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:20.613 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:20.613 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:20.613 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:20.613 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:20.872 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:21.175 /dev/nbd0 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:21.176 15:46:19 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:21.176 1+0 records in 00:19:21.176 1+0 records out 00:19:21.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431787 s, 9.5 MB/s 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:21.176 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:21.446 { 00:19:21.446 "nbd_device": "/dev/nbd0", 00:19:21.446 "bdev_name": "raid5f" 00:19:21.446 } 00:19:21.446 ]' 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:21.446 { 00:19:21.446 "nbd_device": "/dev/nbd0", 00:19:21.446 "bdev_name": "raid5f" 00:19:21.446 } 00:19:21.446 ]' 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:21.446 256+0 records in 00:19:21.446 256+0 records out 00:19:21.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123637 s, 84.8 MB/s 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:21.446 256+0 records in 00:19:21.446 256+0 records out 00:19:21.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304228 s, 34.5 MB/s 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:21.446 15:46:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:21.446 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:21.446 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:21.446 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:21.446 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:21.446 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:21.446 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:21.446 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:21.446 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:21.707 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:21.707 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:21.707 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:21.707 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:21.707 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:21.707 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:21.707 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:21.707 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:21.707 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:21.707 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:21.707 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:21.968 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:22.228 malloc_lvol_verify 00:19:22.228 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:22.228 8cbba0df-df63-4cac-a6e9-cd8509799c0f 00:19:22.489 15:46:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:22.489 daec927e-ecd2-4919-83a7-8da69146fc47 00:19:22.489 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:22.749 /dev/nbd0 00:19:22.749 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:22.749 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:22.749 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:22.749 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:22.749 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:22.749 mke2fs 1.47.0 (5-Feb-2023) 00:19:22.749 Discarding device blocks: 0/4096 done 00:19:22.749 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:22.749 00:19:22.749 Allocating group tables: 0/1 done 00:19:22.749 Writing inode tables: 0/1 done 00:19:22.749 Creating journal (1024 blocks): done 00:19:22.749 Writing superblocks and filesystem accounting information: 0/1 done 00:19:22.749 00:19:22.749 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:22.749 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:22.749 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:22.749 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:22.749 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:22.749 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:22.749 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89770 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 89770 ']' 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 89770 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89770 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:23.010 killing process with pid 89770 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89770' 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 89770 00:19:23.010 15:46:21 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 89770 00:19:24.923 15:46:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:24.923 00:19:24.923 real 0m5.720s 00:19:24.923 user 0m7.602s 00:19:24.923 sys 0m1.334s 00:19:24.923 15:46:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.923 15:46:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:24.923 ************************************ 00:19:24.923 END TEST bdev_nbd 00:19:24.923 ************************************ 00:19:24.923 15:46:23 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:24.923 15:46:23 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:24.923 15:46:23 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:24.923 15:46:23 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:24.923 15:46:23 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:24.923 15:46:23 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.923 15:46:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:24.923 ************************************ 00:19:24.923 START TEST bdev_fio 00:19:24.923 ************************************ 00:19:24.923 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:24.923 15:46:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:24.923 15:46:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:24.923 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:24.923 15:46:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:24.923 15:46:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:24.923 15:46:23 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:24.923 15:46:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:24.923 15:46:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:24.924 ************************************ 00:19:24.924 START TEST bdev_fio_rw_verify 00:19:24.924 ************************************ 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:24.924 15:46:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:25.184 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:25.184 fio-3.35 00:19:25.184 Starting 1 thread 00:19:37.408 00:19:37.408 job_raid5f: (groupid=0, jobs=1): err= 0: pid=89979: Mon Nov 25 15:46:34 2024 00:19:37.408 read: IOPS=11.9k, BW=46.6MiB/s (48.9MB/s)(466MiB/10001msec) 00:19:37.408 slat (usec): min=17, max=112, avg=19.58, stdev= 2.68 00:19:37.408 clat (usec): min=11, max=618, avg=134.87, stdev=47.55 00:19:37.408 lat (usec): min=31, max=674, avg=154.45, stdev=48.12 00:19:37.408 clat percentiles (usec): 00:19:37.408 | 50.000th=[ 137], 99.000th=[ 227], 99.900th=[ 285], 99.990th=[ 537], 00:19:37.408 | 99.999th=[ 594] 00:19:37.408 write: IOPS=12.5k, BW=48.8MiB/s (51.2MB/s)(482MiB/9874msec); 0 zone resets 00:19:37.408 slat (usec): min=8, max=220, avg=17.17, stdev= 4.49 00:19:37.408 clat (usec): min=62, max=1779, avg=309.14, stdev=50.57 00:19:37.408 lat (usec): min=78, max=1797, avg=326.31, stdev=52.21 00:19:37.408 clat percentiles (usec): 00:19:37.408 | 50.000th=[ 314], 99.000th=[ 396], 99.900th=[ 832], 99.990th=[ 1598], 00:19:37.408 | 99.999th=[ 1762] 00:19:37.408 bw ( KiB/s): min=47112, max=51576, per=98.80%, avg=49381.89, stdev=1434.53, samples=19 00:19:37.408 iops : min=11778, max=12894, avg=12345.47, stdev=358.63, samples=19 00:19:37.408 lat (usec) : 20=0.01%, 50=0.01%, 100=13.93%, 
250=39.83%, 500=46.13% 00:19:37.408 lat (usec) : 750=0.05%, 1000=0.03% 00:19:37.408 lat (msec) : 2=0.03% 00:19:37.408 cpu : usr=98.66%, sys=0.49%, ctx=24, majf=0, minf=9810 00:19:37.408 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:37.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.408 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.408 issued rwts: total=119307,123383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.408 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:37.408 00:19:37.408 Run status group 0 (all jobs): 00:19:37.408 READ: bw=46.6MiB/s (48.9MB/s), 46.6MiB/s-46.6MiB/s (48.9MB/s-48.9MB/s), io=466MiB (489MB), run=10001-10001msec 00:19:37.408 WRITE: bw=48.8MiB/s (51.2MB/s), 48.8MiB/s-48.8MiB/s (51.2MB/s-51.2MB/s), io=482MiB (505MB), run=9874-9874msec 00:19:37.668 ----------------------------------------------------- 00:19:37.668 Suppressions used: 00:19:37.668 count bytes template 00:19:37.668 1 7 /usr/src/fio/parse.c 00:19:37.668 386 37056 /usr/src/fio/iolog.c 00:19:37.668 1 8 libtcmalloc_minimal.so 00:19:37.668 1 904 libcrypto.so 00:19:37.668 ----------------------------------------------------- 00:19:37.668 00:19:37.668 00:19:37.668 real 0m12.894s 00:19:37.668 user 0m13.015s 00:19:37.668 sys 0m0.771s 00:19:37.668 15:46:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.668 15:46:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:37.668 ************************************ 00:19:37.668 END TEST bdev_fio_rw_verify 00:19:37.668 ************************************ 00:19:37.668 15:46:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:37.668 15:46:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:37.668 15:46:36 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:37.668 15:46:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:37.668 15:46:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:37.668 15:46:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:37.668 15:46:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:37.668 15:46:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:37.668 15:46:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:37.668 15:46:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:37.668 15:46:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:37.668 15:46:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:37.668 15:46:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:37.928 15:46:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:37.928 15:46:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:37.928 15:46:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:37.928 15:46:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "be9cdeb2-d7f5-485d-9c12-b067be15e7c0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "be9cdeb2-d7f5-485d-9c12-b067be15e7c0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "be9cdeb2-d7f5-485d-9c12-b067be15e7c0",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "e084bbed-6f14-4528-a2f1-8f97431c460f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "b075bd03-cb84-4119-a695-038c8c7539b3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "98f13446-24f9-42dc-93ae-e534d3fbeb08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:37.928 15:46:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:37.928 15:46:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:37.928 15:46:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:37.928 15:46:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:37.928 /home/vagrant/spdk_repo/spdk 00:19:37.928 15:46:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:37.928 15:46:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:19:37.928 00:19:37.928 real 0m13.202s 00:19:37.928 user 0m13.141s 00:19:37.928 sys 0m0.919s 00:19:37.928 15:46:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.928 15:46:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:37.928 ************************************ 00:19:37.928 END TEST bdev_fio 00:19:37.928 ************************************ 00:19:37.928 15:46:36 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:37.928 15:46:36 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:37.928 15:46:36 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:37.928 15:46:36 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.928 15:46:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:37.928 ************************************ 00:19:37.928 START TEST bdev_verify 00:19:37.928 ************************************ 00:19:37.928 15:46:36 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:37.928 [2024-11-25 15:46:36.573562] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 
00:19:37.928 [2024-11-25 15:46:36.573696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90145 ] 00:19:38.189 [2024-11-25 15:46:36.749081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:38.449 [2024-11-25 15:46:36.894583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.449 [2024-11-25 15:46:36.894615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.019 Running I/O for 5 seconds... 00:19:40.898 10731.00 IOPS, 41.92 MiB/s [2024-11-25T15:46:40.515Z] 10894.50 IOPS, 42.56 MiB/s [2024-11-25T15:46:41.897Z] 10881.67 IOPS, 42.51 MiB/s [2024-11-25T15:46:42.838Z] 10901.50 IOPS, 42.58 MiB/s [2024-11-25T15:46:42.838Z] 10913.40 IOPS, 42.63 MiB/s 00:19:44.157 Latency(us) 00:19:44.157 [2024-11-25T15:46:42.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.157 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:44.157 Verification LBA range: start 0x0 length 0x2000 00:19:44.157 raid5f : 5.02 6491.59 25.36 0.00 0.00 29737.39 253.99 21749.94 00:19:44.157 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:44.157 Verification LBA range: start 0x2000 length 0x2000 00:19:44.157 raid5f : 5.01 4419.98 17.27 0.00 0.00 43566.78 965.87 31823.59 00:19:44.157 [2024-11-25T15:46:42.838Z] =================================================================================================================== 00:19:44.157 [2024-11-25T15:46:42.838Z] Total : 10911.57 42.62 0.00 0.00 35338.34 253.99 31823.59 00:19:45.541 00:19:45.541 real 0m7.438s 00:19:45.541 user 0m13.644s 00:19:45.541 sys 0m0.370s 00:19:45.541 15:46:43 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.541 15:46:43 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:45.541 ************************************ 00:19:45.541 END TEST bdev_verify 00:19:45.541 ************************************ 00:19:45.541 15:46:43 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:45.541 15:46:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:45.541 15:46:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.541 15:46:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:45.541 ************************************ 00:19:45.541 START TEST bdev_verify_big_io 00:19:45.541 ************************************ 00:19:45.541 15:46:43 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:45.541 [2024-11-25 15:46:44.080153] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:19:45.541 [2024-11-25 15:46:44.080277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90246 ] 00:19:45.801 [2024-11-25 15:46:44.253139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:45.801 [2024-11-25 15:46:44.389555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.801 [2024-11-25 15:46:44.389583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.371 Running I/O for 5 seconds... 
00:19:48.692 633.00 IOPS, 39.56 MiB/s [2024-11-25T15:46:48.314Z] 761.00 IOPS, 47.56 MiB/s [2024-11-25T15:46:49.254Z] 761.33 IOPS, 47.58 MiB/s [2024-11-25T15:46:50.192Z] 793.25 IOPS, 49.58 MiB/s [2024-11-25T15:46:50.454Z] 799.00 IOPS, 49.94 MiB/s 00:19:51.773 Latency(us) 00:19:51.773 [2024-11-25T15:46:50.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.773 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:51.773 Verification LBA range: start 0x0 length 0x200 00:19:51.773 raid5f : 5.23 448.72 28.04 0.00 0.00 7069597.81 165.45 313199.12 00:19:51.773 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:51.773 Verification LBA range: start 0x200 length 0x200 00:19:51.773 raid5f : 5.26 349.86 21.87 0.00 0.00 8967793.80 197.65 386462.07 00:19:51.773 [2024-11-25T15:46:50.454Z] =================================================================================================================== 00:19:51.773 [2024-11-25T15:46:50.454Z] Total : 798.57 49.91 0.00 0.00 7903571.21 165.45 386462.07 00:19:53.161 00:19:53.161 real 0m7.686s 00:19:53.161 user 0m14.182s 00:19:53.161 sys 0m0.345s 00:19:53.161 15:46:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.161 15:46:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.161 ************************************ 00:19:53.161 END TEST bdev_verify_big_io 00:19:53.161 ************************************ 00:19:53.161 15:46:51 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:53.161 15:46:51 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:53.161 15:46:51 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.161 15:46:51 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:53.161 ************************************ 00:19:53.161 START TEST bdev_write_zeroes 00:19:53.161 ************************************ 00:19:53.161 15:46:51 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:53.161 [2024-11-25 15:46:51.838588] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:19:53.161 [2024-11-25 15:46:51.838694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90346 ] 00:19:53.421 [2024-11-25 15:46:52.011953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.681 [2024-11-25 15:46:52.143793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.252 Running I/O for 1 seconds... 
00:19:55.191 29967.00 IOPS, 117.06 MiB/s 00:19:55.191 Latency(us) 00:19:55.191 [2024-11-25T15:46:53.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.191 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:55.191 raid5f : 1.01 29943.92 116.97 0.00 0.00 4261.54 1330.75 5809.52 00:19:55.191 [2024-11-25T15:46:53.872Z] =================================================================================================================== 00:19:55.191 [2024-11-25T15:46:53.872Z] Total : 29943.92 116.97 0.00 0.00 4261.54 1330.75 5809.52 00:19:56.574 00:19:56.574 real 0m3.444s 00:19:56.574 user 0m2.972s 00:19:56.574 sys 0m0.344s 00:19:56.574 15:46:55 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:56.574 15:46:55 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:56.574 ************************************ 00:19:56.574 END TEST bdev_write_zeroes 00:19:56.574 ************************************ 00:19:56.834 15:46:55 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:56.834 15:46:55 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:56.834 15:46:55 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:56.834 15:46:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:56.834 ************************************ 00:19:56.834 START TEST bdev_json_nonenclosed 00:19:56.834 ************************************ 00:19:56.834 15:46:55 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:56.834 [2024-11-25 
15:46:55.360247] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:19:56.834 [2024-11-25 15:46:55.360364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90402 ] 00:19:57.094 [2024-11-25 15:46:55.537538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.094 [2024-11-25 15:46:55.664312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.094 [2024-11-25 15:46:55.664417] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:57.094 [2024-11-25 15:46:55.664445] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:57.094 [2024-11-25 15:46:55.664456] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:57.354 00:19:57.354 real 0m0.652s 00:19:57.354 user 0m0.392s 00:19:57.354 sys 0m0.156s 00:19:57.354 15:46:55 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.354 15:46:55 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:57.354 ************************************ 00:19:57.354 END TEST bdev_json_nonenclosed 00:19:57.354 ************************************ 00:19:57.354 15:46:55 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:57.354 15:46:55 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:57.354 15:46:55 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.354 15:46:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:57.354 
************************************ 00:19:57.354 START TEST bdev_json_nonarray 00:19:57.354 ************************************ 00:19:57.354 15:46:55 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:57.614 [2024-11-25 15:46:56.091712] Starting SPDK v25.01-pre git sha1 ff2e6bfe4 / DPDK 24.03.0 initialization... 00:19:57.614 [2024-11-25 15:46:56.091852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90429 ] 00:19:57.614 [2024-11-25 15:46:56.269303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.874 [2024-11-25 15:46:56.407241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.874 [2024-11-25 15:46:56.407352] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:57.874 [2024-11-25 15:46:56.407371] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:57.874 [2024-11-25 15:46:56.407390] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:58.134 00:19:58.134 real 0m0.675s 00:19:58.134 user 0m0.417s 00:19:58.134 sys 0m0.152s 00:19:58.134 15:46:56 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.134 15:46:56 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:58.134 ************************************ 00:19:58.134 END TEST bdev_json_nonarray 00:19:58.134 ************************************ 00:19:58.134 15:46:56 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:19:58.134 15:46:56 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:19:58.134 15:46:56 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:19:58.134 15:46:56 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:19:58.134 15:46:56 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:19:58.134 15:46:56 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:58.134 15:46:56 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:58.134 15:46:56 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:58.134 15:46:56 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:58.134 15:46:56 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:58.134 15:46:56 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:58.134 ************************************ 00:19:58.134 END TEST blockdev_raid5f 00:19:58.134 ************************************ 00:19:58.134 00:19:58.134 real 0m49.574s 00:19:58.134 user 1m6.094s 00:19:58.134 sys 0m5.703s 00:19:58.134 15:46:56 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.134 15:46:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:58.135 15:46:56 -- spdk/autotest.sh@194 -- # uname -s 00:19:58.135 15:46:56 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:58.135 15:46:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:58.135 15:46:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:58.135 15:46:56 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:58.135 15:46:56 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:58.135 15:46:56 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:58.135 15:46:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:58.135 15:46:56 -- common/autotest_common.sh@10 -- # set +x 00:19:58.395 15:46:56 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:58.395 15:46:56 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:58.395 15:46:56 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:58.395 15:46:56 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:58.395 15:46:56 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:58.395 15:46:56 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:58.395 15:46:56 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:58.395 15:46:56 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:58.395 15:46:56 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:58.395 15:46:56 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:58.395 15:46:56 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:58.395 15:46:56 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:58.395 15:46:56 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:58.395 15:46:56 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:58.395 15:46:56 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:58.395 15:46:56 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:58.395 15:46:56 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:58.395 15:46:56 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:19:58.395 15:46:56 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:19:58.395 15:46:56 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:19:58.395 15:46:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:58.395 15:46:56 -- common/autotest_common.sh@10 -- # set +x 00:19:58.395 15:46:56 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:19:58.395 15:46:56 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:19:58.395 15:46:56 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:19:58.395 15:46:56 -- common/autotest_common.sh@10 -- # set +x 00:20:00.936 INFO: APP EXITING 00:20:00.936 INFO: killing all VMs 00:20:00.936 INFO: killing vhost app 00:20:00.936 INFO: EXIT DONE 00:20:01.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:01.196 Waiting for block devices as requested 00:20:01.196 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:01.456 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:02.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:02.398 Cleaning 00:20:02.398 Removing: /var/run/dpdk/spdk0/config 00:20:02.398 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:02.398 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:02.398 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:02.398 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:02.398 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:02.398 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:02.398 Removing: /dev/shm/spdk_tgt_trace.pid56822 00:20:02.398 Removing: /var/run/dpdk/spdk0 00:20:02.398 Removing: /var/run/dpdk/spdk_pid56591 00:20:02.398 Removing: /var/run/dpdk/spdk_pid56822 00:20:02.398 Removing: /var/run/dpdk/spdk_pid57051 00:20:02.398 Removing: /var/run/dpdk/spdk_pid57156 00:20:02.398 Removing: /var/run/dpdk/spdk_pid57201 00:20:02.398 Removing: /var/run/dpdk/spdk_pid57340 00:20:02.398 Removing: 
/var/run/dpdk/spdk_pid57358 00:20:02.398 Removing: /var/run/dpdk/spdk_pid57568 00:20:02.398 Removing: /var/run/dpdk/spdk_pid57674 00:20:02.398 Removing: /var/run/dpdk/spdk_pid57781 00:20:02.398 Removing: /var/run/dpdk/spdk_pid57903 00:20:02.398 Removing: /var/run/dpdk/spdk_pid58006 00:20:02.398 Removing: /var/run/dpdk/spdk_pid58051 00:20:02.398 Removing: /var/run/dpdk/spdk_pid58082 00:20:02.398 Removing: /var/run/dpdk/spdk_pid58158 00:20:02.398 Removing: /var/run/dpdk/spdk_pid58264 00:20:02.398 Removing: /var/run/dpdk/spdk_pid58711 00:20:02.398 Removing: /var/run/dpdk/spdk_pid58781 00:20:02.398 Removing: /var/run/dpdk/spdk_pid58855 00:20:02.398 Removing: /var/run/dpdk/spdk_pid58871 00:20:02.398 Removing: /var/run/dpdk/spdk_pid59019 00:20:02.398 Removing: /var/run/dpdk/spdk_pid59035 00:20:02.398 Removing: /var/run/dpdk/spdk_pid59181 00:20:02.398 Removing: /var/run/dpdk/spdk_pid59202 00:20:02.398 Removing: /var/run/dpdk/spdk_pid59266 00:20:02.398 Removing: /var/run/dpdk/spdk_pid59284 00:20:02.398 Removing: /var/run/dpdk/spdk_pid59356 00:20:02.398 Removing: /var/run/dpdk/spdk_pid59374 00:20:02.398 Removing: /var/run/dpdk/spdk_pid59569 00:20:02.398 Removing: /var/run/dpdk/spdk_pid59611 00:20:02.398 Removing: /var/run/dpdk/spdk_pid59700 00:20:02.398 Removing: /var/run/dpdk/spdk_pid61021 00:20:02.398 Removing: /var/run/dpdk/spdk_pid61227 00:20:02.398 Removing: /var/run/dpdk/spdk_pid61373 00:20:02.398 Removing: /var/run/dpdk/spdk_pid62005 00:20:02.398 Removing: /var/run/dpdk/spdk_pid62217 00:20:02.398 Removing: /var/run/dpdk/spdk_pid62357 00:20:02.658 Removing: /var/run/dpdk/spdk_pid62994 00:20:02.658 Removing: /var/run/dpdk/spdk_pid63319 00:20:02.658 Removing: /var/run/dpdk/spdk_pid63459 00:20:02.658 Removing: /var/run/dpdk/spdk_pid64844 00:20:02.658 Removing: /var/run/dpdk/spdk_pid65103 00:20:02.658 Removing: /var/run/dpdk/spdk_pid65243 00:20:02.658 Removing: /var/run/dpdk/spdk_pid66629 00:20:02.658 Removing: /var/run/dpdk/spdk_pid66882 00:20:02.658 Removing: 
/var/run/dpdk/spdk_pid67030 00:20:02.658 Removing: /var/run/dpdk/spdk_pid68409 00:20:02.658 Removing: /var/run/dpdk/spdk_pid68855 00:20:02.658 Removing: /var/run/dpdk/spdk_pid68995 00:20:02.658 Removing: /var/run/dpdk/spdk_pid70475 00:20:02.658 Removing: /var/run/dpdk/spdk_pid70734 00:20:02.658 Removing: /var/run/dpdk/spdk_pid70874 00:20:02.658 Removing: /var/run/dpdk/spdk_pid72354 00:20:02.658 Removing: /var/run/dpdk/spdk_pid72613 00:20:02.658 Removing: /var/run/dpdk/spdk_pid72764 00:20:02.658 Removing: /var/run/dpdk/spdk_pid74246 00:20:02.658 Removing: /var/run/dpdk/spdk_pid74733 00:20:02.659 Removing: /var/run/dpdk/spdk_pid74873 00:20:02.659 Removing: /var/run/dpdk/spdk_pid75017 00:20:02.659 Removing: /var/run/dpdk/spdk_pid75430 00:20:02.659 Removing: /var/run/dpdk/spdk_pid76149 00:20:02.659 Removing: /var/run/dpdk/spdk_pid76525 00:20:02.659 Removing: /var/run/dpdk/spdk_pid77208 00:20:02.659 Removing: /var/run/dpdk/spdk_pid77655 00:20:02.659 Removing: /var/run/dpdk/spdk_pid78397 00:20:02.659 Removing: /var/run/dpdk/spdk_pid78807 00:20:02.659 Removing: /var/run/dpdk/spdk_pid80760 00:20:02.659 Removing: /var/run/dpdk/spdk_pid81198 00:20:02.659 Removing: /var/run/dpdk/spdk_pid81641 00:20:02.659 Removing: /var/run/dpdk/spdk_pid83715 00:20:02.659 Removing: /var/run/dpdk/spdk_pid84201 00:20:02.659 Removing: /var/run/dpdk/spdk_pid84726 00:20:02.659 Removing: /var/run/dpdk/spdk_pid85780 00:20:02.659 Removing: /var/run/dpdk/spdk_pid86108 00:20:02.659 Removing: /var/run/dpdk/spdk_pid87041 00:20:02.659 Removing: /var/run/dpdk/spdk_pid87369 00:20:02.659 Removing: /var/run/dpdk/spdk_pid88309 00:20:02.659 Removing: /var/run/dpdk/spdk_pid88632 00:20:02.659 Removing: /var/run/dpdk/spdk_pid89315 00:20:02.659 Removing: /var/run/dpdk/spdk_pid89595 00:20:02.659 Removing: /var/run/dpdk/spdk_pid89663 00:20:02.659 Removing: /var/run/dpdk/spdk_pid89711 00:20:02.659 Removing: /var/run/dpdk/spdk_pid89964 00:20:02.659 Removing: /var/run/dpdk/spdk_pid90145 00:20:02.659 Removing: 
/var/run/dpdk/spdk_pid90246 00:20:02.659 Removing: /var/run/dpdk/spdk_pid90346 00:20:02.659 Removing: /var/run/dpdk/spdk_pid90402 00:20:02.659 Removing: /var/run/dpdk/spdk_pid90429 00:20:02.659 Clean 00:20:02.919 15:47:01 -- common/autotest_common.sh@1453 -- # return 0 00:20:02.919 15:47:01 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:20:02.919 15:47:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:02.919 15:47:01 -- common/autotest_common.sh@10 -- # set +x 00:20:02.919 15:47:01 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:20:02.919 15:47:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:02.919 15:47:01 -- common/autotest_common.sh@10 -- # set +x 00:20:02.919 15:47:01 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:02.919 15:47:01 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:02.919 15:47:01 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:02.919 15:47:01 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:20:02.919 15:47:01 -- spdk/autotest.sh@398 -- # hostname 00:20:02.919 15:47:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:03.178 geninfo: WARNING: invalid characters removed from testname! 
00:20:25.138 15:47:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:26.076 15:47:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:28.017 15:47:26 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:29.929 15:47:28 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:31.854 15:47:30 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:34.433 15:47:32 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:36.344 15:47:34 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:36.344 15:47:34 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:36.344 15:47:34 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:36.344 15:47:34 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:36.344 15:47:34 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:36.344 15:47:34 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:36.344 + [[ -n 5424 ]] 00:20:36.344 + sudo kill 5424 00:20:36.354 [Pipeline] } 00:20:36.371 [Pipeline] // timeout 00:20:36.377 [Pipeline] } 00:20:36.392 [Pipeline] // stage 00:20:36.399 [Pipeline] } 00:20:36.414 [Pipeline] // catchError 00:20:36.425 [Pipeline] stage 00:20:36.427 [Pipeline] { (Stop VM) 00:20:36.441 [Pipeline] sh 00:20:36.730 + vagrant halt 00:20:39.272 ==> default: Halting domain... 00:20:47.421 [Pipeline] sh 00:20:47.704 + vagrant destroy -f 00:20:50.245 ==> default: Removing domain... 
00:20:50.258 [Pipeline] sh 00:20:50.544 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:50.554 [Pipeline] } 00:20:50.568 [Pipeline] // stage 00:20:50.574 [Pipeline] } 00:20:50.588 [Pipeline] // dir 00:20:50.594 [Pipeline] } 00:20:50.608 [Pipeline] // wrap 00:20:50.614 [Pipeline] } 00:20:50.627 [Pipeline] // catchError 00:20:50.637 [Pipeline] stage 00:20:50.640 [Pipeline] { (Epilogue) 00:20:50.652 [Pipeline] sh 00:20:50.937 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:55.152 [Pipeline] catchError 00:20:55.154 [Pipeline] { 00:20:55.167 [Pipeline] sh 00:20:55.453 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:55.453 Artifacts sizes are good 00:20:55.463 [Pipeline] } 00:20:55.478 [Pipeline] // catchError 00:20:55.489 [Pipeline] archiveArtifacts 00:20:55.508 Archiving artifacts 00:20:55.638 [Pipeline] cleanWs 00:20:55.650 [WS-CLEANUP] Deleting project workspace... 00:20:55.650 [WS-CLEANUP] Deferred wipeout is used... 00:20:55.657 [WS-CLEANUP] done 00:20:55.658 [Pipeline] } 00:20:55.673 [Pipeline] // stage 00:20:55.679 [Pipeline] } 00:20:55.693 [Pipeline] // node 00:20:55.698 [Pipeline] End of Pipeline 00:20:55.737 Finished: SUCCESS